Jan 21 22:44:22 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 21 22:44:22 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 21 22:44:22 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 21 22:44:22 localhost kernel: BIOS-provided physical RAM map:
Jan 21 22:44:22 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 21 22:44:22 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 21 22:44:22 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 21 22:44:22 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 21 22:44:22 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 21 22:44:22 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 21 22:44:22 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 21 22:44:22 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 21 22:44:22 localhost kernel: NX (Execute Disable) protection: active
Jan 21 22:44:22 localhost kernel: APIC: Static calls initialized
Jan 21 22:44:22 localhost kernel: SMBIOS 2.8 present.
Jan 21 22:44:22 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 21 22:44:22 localhost kernel: Hypervisor detected: KVM
Jan 21 22:44:22 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 21 22:44:22 localhost kernel: kvm-clock: using sched offset of 3297372785 cycles
Jan 21 22:44:22 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 21 22:44:22 localhost kernel: tsc: Detected 2800.000 MHz processor
Jan 21 22:44:22 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 21 22:44:22 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 21 22:44:22 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 21 22:44:22 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 21 22:44:22 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 21 22:44:22 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 21 22:44:22 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 21 22:44:22 localhost kernel: Using GB pages for direct mapping
Jan 21 22:44:22 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 21 22:44:22 localhost kernel: ACPI: Early table checksum verification disabled
Jan 21 22:44:22 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 21 22:44:22 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 22:44:22 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 22:44:22 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 22:44:22 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 21 22:44:22 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 22:44:22 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 22:44:22 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 21 22:44:22 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 21 22:44:22 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 21 22:44:22 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 21 22:44:22 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 21 22:44:22 localhost kernel: No NUMA configuration found
Jan 21 22:44:22 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 21 22:44:22 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 21 22:44:22 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 21 22:44:22 localhost kernel: Zone ranges:
Jan 21 22:44:22 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 21 22:44:22 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 21 22:44:22 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 21 22:44:22 localhost kernel:   Device   empty
Jan 21 22:44:22 localhost kernel: Movable zone start for each node
Jan 21 22:44:22 localhost kernel: Early memory node ranges
Jan 21 22:44:22 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 21 22:44:22 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 21 22:44:22 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 21 22:44:22 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 21 22:44:22 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 21 22:44:22 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 21 22:44:22 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 21 22:44:22 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 21 22:44:22 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 21 22:44:22 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 21 22:44:22 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 21 22:44:22 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 21 22:44:22 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 21 22:44:22 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 21 22:44:22 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 21 22:44:22 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 21 22:44:22 localhost kernel: TSC deadline timer available
Jan 21 22:44:22 localhost kernel: CPU topo: Max. logical packages:   8
Jan 21 22:44:22 localhost kernel: CPU topo: Max. logical dies:       8
Jan 21 22:44:22 localhost kernel: CPU topo: Max. dies per package:   1
Jan 21 22:44:22 localhost kernel: CPU topo: Max. threads per core:   1
Jan 21 22:44:22 localhost kernel: CPU topo: Num. cores per package:     1
Jan 21 22:44:22 localhost kernel: CPU topo: Num. threads per package:   1
Jan 21 22:44:22 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 21 22:44:22 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 21 22:44:22 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 21 22:44:22 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 21 22:44:22 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 21 22:44:22 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 21 22:44:22 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 21 22:44:22 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 21 22:44:22 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 21 22:44:22 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 21 22:44:22 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 21 22:44:22 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 21 22:44:22 localhost kernel: Booting paravirtualized kernel on KVM
Jan 21 22:44:22 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 21 22:44:22 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 21 22:44:22 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 21 22:44:22 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 21 22:44:22 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 21 22:44:22 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 21 22:44:22 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 21 22:44:22 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 21 22:44:22 localhost kernel: random: crng init done
Jan 21 22:44:22 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 21 22:44:22 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 21 22:44:22 localhost kernel: Fallback order for Node 0: 0 
Jan 21 22:44:22 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 21 22:44:22 localhost kernel: Policy zone: Normal
Jan 21 22:44:22 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 21 22:44:22 localhost kernel: software IO TLB: area num 8.
Jan 21 22:44:22 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 21 22:44:22 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 21 22:44:22 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 21 22:44:22 localhost kernel: Dynamic Preempt: voluntary
Jan 21 22:44:22 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 21 22:44:22 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 21 22:44:22 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 21 22:44:22 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 21 22:44:22 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 21 22:44:22 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 21 22:44:22 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 21 22:44:22 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 21 22:44:22 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 21 22:44:22 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 21 22:44:22 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 21 22:44:22 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 21 22:44:22 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 21 22:44:22 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 21 22:44:22 localhost kernel: Console: colour VGA+ 80x25
Jan 21 22:44:22 localhost kernel: printk: console [ttyS0] enabled
Jan 21 22:44:22 localhost kernel: ACPI: Core revision 20230331
Jan 21 22:44:22 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 21 22:44:22 localhost kernel: x2apic enabled
Jan 21 22:44:22 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 21 22:44:22 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 21 22:44:22 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 21 22:44:22 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 21 22:44:22 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 21 22:44:22 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 21 22:44:22 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 21 22:44:22 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 21 22:44:22 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 21 22:44:22 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 21 22:44:22 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 21 22:44:22 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 21 22:44:22 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 21 22:44:22 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 21 22:44:22 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 21 22:44:22 localhost kernel: x86/bugs: return thunk changed
Jan 21 22:44:22 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 21 22:44:22 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 21 22:44:22 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 21 22:44:22 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 21 22:44:22 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 21 22:44:22 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 21 22:44:22 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 21 22:44:22 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 21 22:44:22 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 21 22:44:22 localhost kernel: landlock: Up and running.
Jan 21 22:44:22 localhost kernel: Yama: becoming mindful.
Jan 21 22:44:22 localhost kernel: SELinux:  Initializing.
Jan 21 22:44:22 localhost kernel: LSM support for eBPF active
Jan 21 22:44:22 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 21 22:44:22 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 21 22:44:22 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 21 22:44:22 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 21 22:44:22 localhost kernel: ... version:                0
Jan 21 22:44:22 localhost kernel: ... bit width:              48
Jan 21 22:44:22 localhost kernel: ... generic registers:      6
Jan 21 22:44:22 localhost kernel: ... value mask:             0000ffffffffffff
Jan 21 22:44:22 localhost kernel: ... max period:             00007fffffffffff
Jan 21 22:44:22 localhost kernel: ... fixed-purpose events:   0
Jan 21 22:44:22 localhost kernel: ... event mask:             000000000000003f
Jan 21 22:44:22 localhost kernel: signal: max sigframe size: 1776
Jan 21 22:44:22 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 21 22:44:22 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 21 22:44:22 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 21 22:44:22 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 21 22:44:22 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 21 22:44:22 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 21 22:44:22 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 21 22:44:22 localhost kernel: node 0 deferred pages initialised in 9ms
Jan 21 22:44:22 localhost kernel: Memory: 7763740K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618364K reserved, 0K cma-reserved)
Jan 21 22:44:22 localhost kernel: devtmpfs: initialized
Jan 21 22:44:22 localhost kernel: x86/mm: Memory block size: 128MB
Jan 21 22:44:22 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 21 22:44:22 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 21 22:44:22 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 21 22:44:22 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 21 22:44:22 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 21 22:44:22 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 21 22:44:22 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 21 22:44:22 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 21 22:44:22 localhost kernel: audit: type=2000 audit(1769035460.375:1): state=initialized audit_enabled=0 res=1
Jan 21 22:44:22 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 21 22:44:22 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 21 22:44:22 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 21 22:44:22 localhost kernel: cpuidle: using governor menu
Jan 21 22:44:22 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 21 22:44:22 localhost kernel: PCI: Using configuration type 1 for base access
Jan 21 22:44:22 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 21 22:44:22 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 21 22:44:22 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 21 22:44:22 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 21 22:44:22 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 21 22:44:22 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 21 22:44:22 localhost kernel: Demotion targets for Node 0: null
Jan 21 22:44:22 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 21 22:44:22 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 21 22:44:22 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 21 22:44:22 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 21 22:44:22 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 21 22:44:22 localhost kernel: ACPI: Interpreter enabled
Jan 21 22:44:22 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 21 22:44:22 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 21 22:44:22 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 21 22:44:22 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 21 22:44:22 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 21 22:44:22 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 21 22:44:22 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [3] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [4] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [5] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [6] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [7] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [8] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [9] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [10] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [11] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [12] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [13] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [14] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [15] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [16] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [17] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [18] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [19] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [20] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [21] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [22] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [23] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [24] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [25] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [26] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [27] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [28] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [29] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [30] registered
Jan 21 22:44:22 localhost kernel: acpiphp: Slot [31] registered
Jan 21 22:44:22 localhost kernel: PCI host bridge to bus 0000:00
Jan 21 22:44:22 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 21 22:44:22 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 21 22:44:22 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 21 22:44:22 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 21 22:44:22 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 21 22:44:22 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 21 22:44:22 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 21 22:44:22 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 21 22:44:22 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 21 22:44:22 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 21 22:44:22 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 21 22:44:22 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 21 22:44:22 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 21 22:44:22 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 21 22:44:22 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 21 22:44:22 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 21 22:44:22 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 21 22:44:22 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 21 22:44:22 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 21 22:44:22 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 21 22:44:22 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 21 22:44:22 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 21 22:44:22 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 21 22:44:22 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 21 22:44:22 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 21 22:44:22 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 21 22:44:22 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 21 22:44:22 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 21 22:44:22 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 21 22:44:22 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 21 22:44:22 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 21 22:44:22 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 21 22:44:22 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 21 22:44:22 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 21 22:44:22 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 21 22:44:22 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 21 22:44:22 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 21 22:44:22 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 21 22:44:22 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 21 22:44:22 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 21 22:44:22 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 21 22:44:22 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 21 22:44:22 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 21 22:44:22 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 21 22:44:22 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 21 22:44:22 localhost kernel: iommu: Default domain type: Translated
Jan 21 22:44:22 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 21 22:44:22 localhost kernel: SCSI subsystem initialized
Jan 21 22:44:22 localhost kernel: ACPI: bus type USB registered
Jan 21 22:44:22 localhost kernel: usbcore: registered new interface driver usbfs
Jan 21 22:44:22 localhost kernel: usbcore: registered new interface driver hub
Jan 21 22:44:22 localhost kernel: usbcore: registered new device driver usb
Jan 21 22:44:22 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 21 22:44:22 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 21 22:44:22 localhost kernel: PTP clock support registered
Jan 21 22:44:22 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 21 22:44:22 localhost kernel: NetLabel: Initializing
Jan 21 22:44:22 localhost kernel: NetLabel:  domain hash size = 128
Jan 21 22:44:22 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 21 22:44:22 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 21 22:44:22 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 21 22:44:22 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 21 22:44:22 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 21 22:44:22 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 21 22:44:22 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 21 22:44:22 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 21 22:44:22 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 21 22:44:22 localhost kernel: vgaarb: loaded
Jan 21 22:44:22 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 21 22:44:22 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 21 22:44:22 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 21 22:44:22 localhost kernel: pnp: PnP ACPI init
Jan 21 22:44:22 localhost kernel: pnp 00:03: [dma 2]
Jan 21 22:44:22 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 21 22:44:22 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 21 22:44:22 localhost kernel: NET: Registered PF_INET protocol family
Jan 21 22:44:22 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 21 22:44:22 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 21 22:44:22 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 21 22:44:22 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 21 22:44:22 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 21 22:44:22 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 21 22:44:22 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 21 22:44:22 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 21 22:44:22 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 21 22:44:22 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 21 22:44:22 localhost kernel: NET: Registered PF_XDP protocol family
Jan 21 22:44:22 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 21 22:44:22 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 21 22:44:22 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 21 22:44:22 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 21 22:44:22 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 21 22:44:22 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 21 22:44:22 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 21 22:44:22 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 21 22:44:22 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 97125 usecs
Jan 21 22:44:22 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 21 22:44:22 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 21 22:44:22 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 21 22:44:22 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 21 22:44:22 localhost kernel: ACPI: bus type thunderbolt registered
Jan 21 22:44:22 localhost kernel: Initialise system trusted keyrings
Jan 21 22:44:22 localhost kernel: Key type blacklist registered
Jan 21 22:44:22 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 21 22:44:22 localhost kernel: zbud: loaded
Jan 21 22:44:22 localhost kernel: integrity: Platform Keyring initialized
Jan 21 22:44:22 localhost kernel: integrity: Machine keyring initialized
Jan 21 22:44:22 localhost kernel: Freeing initrd memory: 87956K
Jan 21 22:44:22 localhost kernel: NET: Registered PF_ALG protocol family
Jan 21 22:44:22 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 21 22:44:22 localhost kernel: Key type asymmetric registered
Jan 21 22:44:22 localhost kernel: Asymmetric key parser 'x509' registered
Jan 21 22:44:22 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 21 22:44:22 localhost kernel: io scheduler mq-deadline registered
Jan 21 22:44:22 localhost kernel: io scheduler kyber registered
Jan 21 22:44:22 localhost kernel: io scheduler bfq registered
Jan 21 22:44:22 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 21 22:44:22 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 21 22:44:22 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 21 22:44:22 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 21 22:44:22 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 21 22:44:22 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 21 22:44:22 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 21 22:44:22 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 21 22:44:22 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 21 22:44:22 localhost kernel: Non-volatile memory driver v1.3
Jan 21 22:44:22 localhost kernel: rdac: device handler registered
Jan 21 22:44:22 localhost kernel: hp_sw: device handler registered
Jan 21 22:44:22 localhost kernel: emc: device handler registered
Jan 21 22:44:22 localhost kernel: alua: device handler registered
Jan 21 22:44:22 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 21 22:44:22 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 21 22:44:22 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 21 22:44:22 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 21 22:44:22 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 21 22:44:22 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 21 22:44:22 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 21 22:44:22 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 21 22:44:22 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 21 22:44:22 localhost kernel: hub 1-0:1.0: USB hub found
Jan 21 22:44:22 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 21 22:44:22 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 21 22:44:22 localhost kernel: usbserial: USB Serial support registered for generic
Jan 21 22:44:22 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 21 22:44:22 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 21 22:44:22 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 21 22:44:22 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 21 22:44:22 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 21 22:44:22 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 21 22:44:22 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 21 22:44:22 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-21T22:44:21 UTC (1769035461)
Jan 21 22:44:22 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 21 22:44:22 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 21 22:44:22 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 21 22:44:22 localhost kernel: usbcore: registered new interface driver usbhid
Jan 21 22:44:22 localhost kernel: usbhid: USB HID core driver
Jan 21 22:44:22 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 21 22:44:22 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 21 22:44:22 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 21 22:44:22 localhost kernel: Initializing XFRM netlink socket
Jan 21 22:44:22 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 21 22:44:22 localhost kernel: Segment Routing with IPv6
Jan 21 22:44:22 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 21 22:44:22 localhost kernel: mpls_gso: MPLS GSO support
Jan 21 22:44:22 localhost kernel: IPI shorthand broadcast: enabled
Jan 21 22:44:22 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 21 22:44:22 localhost kernel: AES CTR mode by8 optimization enabled
Jan 21 22:44:22 localhost kernel: sched_clock: Marking stable (1271003564, 144568449)->(1491618468, -76046455)
Jan 21 22:44:22 localhost kernel: registered taskstats version 1
Jan 21 22:44:22 localhost kernel: Loading compiled-in X.509 certificates
Jan 21 22:44:22 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 21 22:44:22 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 21 22:44:22 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 21 22:44:22 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 21 22:44:22 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 21 22:44:22 localhost kernel: Demotion targets for Node 0: null
Jan 21 22:44:22 localhost kernel: page_owner is disabled
Jan 21 22:44:22 localhost kernel: Key type .fscrypt registered
Jan 21 22:44:22 localhost kernel: Key type fscrypt-provisioning registered
Jan 21 22:44:22 localhost kernel: Key type big_key registered
Jan 21 22:44:22 localhost kernel: Key type encrypted registered
Jan 21 22:44:22 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 21 22:44:22 localhost kernel: Loading compiled-in module X.509 certificates
Jan 21 22:44:22 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 21 22:44:22 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 21 22:44:22 localhost kernel: ima: No architecture policies found
Jan 21 22:44:22 localhost kernel: evm: Initialising EVM extended attributes:
Jan 21 22:44:22 localhost kernel: evm: security.selinux
Jan 21 22:44:22 localhost kernel: evm: security.SMACK64 (disabled)
Jan 21 22:44:22 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 21 22:44:22 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 21 22:44:22 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 21 22:44:22 localhost kernel: evm: security.apparmor (disabled)
Jan 21 22:44:22 localhost kernel: evm: security.ima
Jan 21 22:44:22 localhost kernel: evm: security.capability
Jan 21 22:44:22 localhost kernel: evm: HMAC attrs: 0x1
Jan 21 22:44:22 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 21 22:44:22 localhost kernel: Running certificate verification RSA selftest
Jan 21 22:44:22 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 21 22:44:22 localhost kernel: Running certificate verification ECDSA selftest
Jan 21 22:44:22 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 21 22:44:22 localhost kernel: clk: Disabling unused clocks
Jan 21 22:44:22 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 21 22:44:22 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 21 22:44:22 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 21 22:44:22 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 21 22:44:22 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 21 22:44:22 localhost kernel: Run /init as init process
Jan 21 22:44:22 localhost kernel:   with arguments:
Jan 21 22:44:22 localhost kernel:     /init
Jan 21 22:44:22 localhost kernel:   with environment:
Jan 21 22:44:22 localhost kernel:     HOME=/
Jan 21 22:44:22 localhost kernel:     TERM=linux
Jan 21 22:44:22 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 21 22:44:22 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 21 22:44:22 localhost systemd[1]: Detected virtualization kvm.
Jan 21 22:44:22 localhost systemd[1]: Detected architecture x86-64.
Jan 21 22:44:22 localhost systemd[1]: Running in initrd.
Jan 21 22:44:22 localhost systemd[1]: No hostname configured, using default hostname.
Jan 21 22:44:22 localhost systemd[1]: Hostname set to <localhost>.
Jan 21 22:44:22 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 21 22:44:22 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 21 22:44:22 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 21 22:44:22 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 21 22:44:22 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 21 22:44:22 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 21 22:44:22 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 21 22:44:22 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 21 22:44:22 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 21 22:44:22 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 21 22:44:22 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 21 22:44:22 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 21 22:44:22 localhost systemd[1]: Reached target Local File Systems.
Jan 21 22:44:22 localhost systemd[1]: Reached target Path Units.
Jan 21 22:44:22 localhost systemd[1]: Reached target Slice Units.
Jan 21 22:44:22 localhost systemd[1]: Reached target Swaps.
Jan 21 22:44:22 localhost systemd[1]: Reached target Timer Units.
Jan 21 22:44:22 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 21 22:44:22 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 21 22:44:22 localhost systemd[1]: Listening on Journal Socket.
Jan 21 22:44:22 localhost systemd[1]: Listening on udev Control Socket.
Jan 21 22:44:22 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 21 22:44:22 localhost systemd[1]: Reached target Socket Units.
Jan 21 22:44:22 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 21 22:44:22 localhost systemd[1]: Starting Journal Service...
Jan 21 22:44:22 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 21 22:44:22 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 21 22:44:22 localhost systemd[1]: Starting Create System Users...
Jan 21 22:44:22 localhost systemd[1]: Starting Setup Virtual Console...
Jan 21 22:44:22 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 21 22:44:22 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 21 22:44:22 localhost systemd[1]: Finished Create System Users.
Jan 21 22:44:22 localhost systemd-journald[307]: Journal started
Jan 21 22:44:22 localhost systemd-journald[307]: Runtime Journal (/run/log/journal/31160826614146dca546fae3354f7966) is 8.0M, max 153.6M, 145.6M free.
Jan 21 22:44:22 localhost systemd-sysusers[312]: Creating group 'users' with GID 100.
Jan 21 22:44:22 localhost systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Jan 21 22:44:22 localhost systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 21 22:44:22 localhost systemd[1]: Started Journal Service.
Jan 21 22:44:22 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 21 22:44:22 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 21 22:44:22 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 21 22:44:22 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 21 22:44:22 localhost systemd[1]: Finished Setup Virtual Console.
Jan 21 22:44:22 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 21 22:44:22 localhost systemd[1]: Starting dracut cmdline hook...
Jan 21 22:44:22 localhost dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Jan 21 22:44:22 localhost dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 21 22:44:22 localhost systemd[1]: Finished dracut cmdline hook.
Jan 21 22:44:22 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 21 22:44:22 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 21 22:44:22 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 21 22:44:22 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 21 22:44:22 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 21 22:44:22 localhost kernel: RPC: Registered udp transport module.
Jan 21 22:44:22 localhost kernel: RPC: Registered tcp transport module.
Jan 21 22:44:22 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 21 22:44:22 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 21 22:44:22 localhost rpc.statd[443]: Version 2.5.4 starting
Jan 21 22:44:22 localhost rpc.statd[443]: Initializing NSM state
Jan 21 22:44:22 localhost rpc.idmapd[448]: Setting log level to 0
Jan 21 22:44:22 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 21 22:44:23 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 21 22:44:23 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Jan 21 22:44:23 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 21 22:44:23 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 21 22:44:23 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 21 22:44:23 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 21 22:44:23 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 21 22:44:23 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 21 22:44:23 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 21 22:44:23 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 21 22:44:23 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 21 22:44:23 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 21 22:44:23 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 21 22:44:23 localhost systemd[1]: Reached target Network.
Jan 21 22:44:23 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 21 22:44:23 localhost systemd[1]: Starting dracut initqueue hook...
Jan 21 22:44:23 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 21 22:44:23 localhost systemd[1]: Reached target System Initialization.
Jan 21 22:44:23 localhost systemd[1]: Reached target Basic System.
Jan 21 22:44:23 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 21 22:44:23 localhost kernel: libata version 3.00 loaded.
Jan 21 22:44:23 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 21 22:44:23 localhost kernel: scsi host0: ata_piix
Jan 21 22:44:23 localhost kernel: scsi host1: ata_piix
Jan 21 22:44:23 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 21 22:44:23 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 21 22:44:23 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 21 22:44:23 localhost kernel:  vda: vda1
Jan 21 22:44:23 localhost kernel: ata1: found unknown device (class 0)
Jan 21 22:44:23 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 21 22:44:23 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 21 22:44:23 localhost systemd-udevd[474]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 22:44:23 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 21 22:44:23 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 21 22:44:23 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 21 22:44:23 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 21 22:44:23 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 21 22:44:23 localhost systemd[1]: Reached target Initrd Root Device.
Jan 21 22:44:23 localhost systemd[1]: Finished dracut initqueue hook.
Jan 21 22:44:23 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 21 22:44:23 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 21 22:44:23 localhost systemd[1]: Reached target Remote File Systems.
Jan 21 22:44:23 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 21 22:44:23 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 21 22:44:23 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 21 22:44:23 localhost systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Jan 21 22:44:23 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 21 22:44:23 localhost systemd[1]: Mounting /sysroot...
Jan 21 22:44:24 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 21 22:44:24 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 21 22:44:24 localhost kernel: XFS (vda1): Ending clean mount
Jan 21 22:44:24 localhost systemd[1]: Mounted /sysroot.
Jan 21 22:44:24 localhost systemd[1]: Reached target Initrd Root File System.
Jan 21 22:44:24 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 21 22:44:24 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 21 22:44:24 localhost systemd[1]: Reached target Initrd File Systems.
Jan 21 22:44:24 localhost systemd[1]: Reached target Initrd Default Target.
Jan 21 22:44:24 localhost systemd[1]: Starting dracut mount hook...
Jan 21 22:44:24 localhost systemd[1]: Finished dracut mount hook.
Jan 21 22:44:24 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 21 22:44:24 localhost rpc.idmapd[448]: exiting on signal 15
Jan 21 22:44:24 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 21 22:44:24 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 21 22:44:24 localhost systemd[1]: Stopped target Network.
Jan 21 22:44:24 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 21 22:44:24 localhost systemd[1]: Stopped target Timer Units.
Jan 21 22:44:24 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 21 22:44:24 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 21 22:44:24 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 21 22:44:24 localhost systemd[1]: Stopped target Basic System.
Jan 21 22:44:24 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 21 22:44:24 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 21 22:44:24 localhost systemd[1]: Stopped target Path Units.
Jan 21 22:44:24 localhost systemd[1]: Stopped target Remote File Systems.
Jan 21 22:44:24 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 21 22:44:24 localhost systemd[1]: Stopped target Slice Units.
Jan 21 22:44:24 localhost systemd[1]: Stopped target Socket Units.
Jan 21 22:44:24 localhost systemd[1]: Stopped target System Initialization.
Jan 21 22:44:24 localhost systemd[1]: Stopped target Local File Systems.
Jan 21 22:44:24 localhost systemd[1]: Stopped target Swaps.
Jan 21 22:44:24 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Stopped dracut mount hook.
Jan 21 22:44:24 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 21 22:44:24 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 21 22:44:24 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 21 22:44:24 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 21 22:44:24 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 21 22:44:24 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 21 22:44:24 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 21 22:44:24 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 21 22:44:24 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 21 22:44:24 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 21 22:44:24 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 21 22:44:24 localhost systemd[1]: systemd-udevd.service: Consumed 1.070s CPU time.
Jan 21 22:44:24 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 21 22:44:24 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Closed udev Control Socket.
Jan 21 22:44:24 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Closed udev Kernel Socket.
Jan 21 22:44:24 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 21 22:44:24 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 21 22:44:24 localhost systemd[1]: Starting Cleanup udev Database...
Jan 21 22:44:24 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 21 22:44:24 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 21 22:44:24 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Stopped Create System Users.
Jan 21 22:44:24 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 21 22:44:24 localhost systemd[1]: Finished Cleanup udev Database.
Jan 21 22:44:24 localhost systemd[1]: Reached target Switch Root.
Jan 21 22:44:24 localhost systemd[1]: Starting Switch Root...
Jan 21 22:44:24 localhost systemd[1]: Switching root.
Jan 21 22:44:24 localhost systemd-journald[307]: Journal stopped
Jan 21 22:44:25 localhost systemd-journald[307]: Received SIGTERM from PID 1 (systemd).
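Up to here the messages come from the initramfs journald (PID 307), which PID 1 terminates during the root switch; a new journald instance takes over from the real root a moment later. A minimal sketch, assuming the python3-systemd bindings are installed, that replays this handover from the persisted journal:

    from systemd import journal  # python3-systemd bindings (assumed installed)

    r = journal.Reader()
    r.this_boot()  # restrict to the boot captured in this log
    for entry in r:
        msg = entry.get("MESSAGE", "")
        # "Journal stopped" comes from PID 307; "Journal started" from its successor
        if "Switching root" in msg or msg.startswith("Journal st"):
            print(entry.get("_PID"), entry.get("SYSLOG_IDENTIFIER"), msg)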
Jan 21 22:44:25 localhost kernel: audit: type=1404 audit(1769035464.743:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 21 22:44:25 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 22:44:25 localhost kernel: SELinux:  policy capability open_perms=1
Jan 21 22:44:25 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 22:44:25 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 21 22:44:25 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 22:44:25 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 22:44:25 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 22:44:25 localhost kernel: audit: type=1403 audit(1769035464.899:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 21 22:44:25 localhost systemd[1]: Successfully loaded SELinux policy in 161.147ms.
Jan 21 22:44:25 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 31.619ms.
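The audit record above shows the box flipping from permissive to enforcing (enforcing=1 old_enforcing=0) as the policy loads. A quick way to confirm the resulting mode is to read the selinuxfs node directly; /sys/fs/selinux is the standard mount point on RHEL 9:

    from pathlib import Path

    enforce = Path("/sys/fs/selinux/enforce")  # selinuxfs node; "1" = enforcing
    if enforce.exists():
        print("enforcing" if enforce.read_text().strip() == "1" else "permissive")
    else:
        print("SELinux disabled or selinuxfs not mounted")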
Jan 21 22:44:25 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 21 22:44:25 localhost systemd[1]: Detected virtualization kvm.
Jan 21 22:44:25 localhost systemd[1]: Detected architecture x86-64.
Jan 21 22:44:25 localhost systemd-rc-local-generator[634]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 22:44:25 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 21 22:44:25 localhost systemd[1]: Stopped Switch Root.
Jan 21 22:44:25 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 21 22:44:25 localhost systemd[1]: Created slice Slice /system/getty.
Jan 21 22:44:25 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 21 22:44:25 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 21 22:44:25 localhost systemd[1]: Created slice User and Session Slice.
Jan 21 22:44:25 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 21 22:44:25 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 21 22:44:25 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 21 22:44:25 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 21 22:44:25 localhost systemd[1]: Stopped target Switch Root.
Jan 21 22:44:25 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 21 22:44:25 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 21 22:44:25 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 21 22:44:25 localhost systemd[1]: Reached target Path Units.
Jan 21 22:44:25 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 21 22:44:25 localhost systemd[1]: Reached target Slice Units.
Jan 21 22:44:25 localhost systemd[1]: Reached target Swaps.
Jan 21 22:44:25 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 21 22:44:25 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 21 22:44:25 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 21 22:44:25 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 21 22:44:25 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 21 22:44:25 localhost systemd[1]: Listening on udev Control Socket.
Jan 21 22:44:25 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 21 22:44:25 localhost systemd[1]: Mounting Huge Pages File System...
Jan 21 22:44:25 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 21 22:44:25 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 21 22:44:25 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 21 22:44:25 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 21 22:44:25 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 21 22:44:25 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 21 22:44:25 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 21 22:44:25 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 21 22:44:25 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 21 22:44:25 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 21 22:44:25 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 21 22:44:25 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 21 22:44:25 localhost systemd[1]: Stopped Journal Service.
Jan 21 22:44:25 localhost systemd[1]: Starting Journal Service...
Jan 21 22:44:25 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 21 22:44:25 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 21 22:44:25 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 21 22:44:25 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 21 22:44:25 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 21 22:44:25 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 21 22:44:25 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 21 22:44:25 localhost systemd-journald[675]: Journal started
Jan 21 22:44:25 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 21 22:44:25 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 21 22:44:25 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 21 22:44:25 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 21 22:44:25 localhost systemd[1]: Started Journal Service.
Jan 21 22:44:25 localhost systemd[1]: Mounted Huge Pages File System.
Jan 21 22:44:25 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 21 22:44:25 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 21 22:44:25 localhost kernel: fuse: init (API version 7.37)
Jan 21 22:44:25 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 21 22:44:25 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 21 22:44:25 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 21 22:44:25 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 21 22:44:25 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 21 22:44:25 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 21 22:44:25 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 21 22:44:25 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 21 22:44:25 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 21 22:44:25 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 21 22:44:25 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 21 22:44:25 localhost kernel: ACPI: bus type drm_connector registered
Jan 21 22:44:25 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 21 22:44:25 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 21 22:44:25 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 21 22:44:25 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 21 22:44:25 localhost systemd[1]: Starting Create System Users...
Jan 21 22:44:25 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 21 22:44:25 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 21 22:44:25 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 21 22:44:25 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 21 22:44:25 localhost systemd-journald[675]: Received client request to flush runtime journal.
Jan 21 22:44:25 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 21 22:44:25 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 21 22:44:25 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 21 22:44:25 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 21 22:44:25 localhost systemd[1]: Mounting FUSE Control File System...
Jan 21 22:44:25 localhost systemd[1]: Finished Create System Users.
Jan 21 22:44:25 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 21 22:44:25 localhost systemd[1]: Mounted FUSE Control File System.
Jan 21 22:44:25 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 21 22:44:25 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 21 22:44:25 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 21 22:44:25 localhost systemd[1]: Reached target Local File Systems.
Jan 21 22:44:25 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 21 22:44:25 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 21 22:44:25 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 21 22:44:25 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 21 22:44:25 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 21 22:44:25 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 21 22:44:25 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 21 22:44:25 localhost bootctl[695]: Couldn't find EFI system partition, skipping.
Jan 21 22:44:25 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 21 22:44:25 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 21 22:44:25 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 21 22:44:25 localhost systemd[1]: Starting Security Auditing Service...
Jan 21 22:44:25 localhost systemd[1]: Starting RPC Bind...
Jan 21 22:44:25 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 21 22:44:25 localhost auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 21 22:44:25 localhost auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 21 22:44:26 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 21 22:44:26 localhost systemd[1]: Started RPC Bind.
Jan 21 22:44:26 localhost augenrules[707]: /sbin/augenrules: No change
Jan 21 22:44:26 localhost augenrules[722]: No rules
Jan 21 22:44:26 localhost augenrules[722]: enabled 1
Jan 21 22:44:26 localhost augenrules[722]: failure 1
Jan 21 22:44:26 localhost augenrules[722]: pid 702
Jan 21 22:44:26 localhost augenrules[722]: rate_limit 0
Jan 21 22:44:26 localhost augenrules[722]: backlog_limit 8192
Jan 21 22:44:26 localhost augenrules[722]: lost 0
Jan 21 22:44:26 localhost augenrules[722]: backlog 0
Jan 21 22:44:26 localhost augenrules[722]: backlog_wait_time 60000
Jan 21 22:44:26 localhost augenrules[722]: backlog_wait_time_actual 0
Jan 21 22:44:26 localhost augenrules[722]: enabled 1
Jan 21 22:44:26 localhost augenrules[722]: failure 1
Jan 21 22:44:26 localhost augenrules[722]: pid 702
Jan 21 22:44:26 localhost augenrules[722]: rate_limit 0
Jan 21 22:44:26 localhost augenrules[722]: backlog_limit 8192
Jan 21 22:44:26 localhost augenrules[722]: lost 0
Jan 21 22:44:26 localhost augenrules[722]: backlog 1
Jan 21 22:44:26 localhost augenrules[722]: backlog_wait_time 60000
Jan 21 22:44:26 localhost augenrules[722]: backlog_wait_time_actual 0
Jan 21 22:44:26 localhost augenrules[722]: enabled 1
Jan 21 22:44:26 localhost augenrules[722]: failure 1
Jan 21 22:44:26 localhost augenrules[722]: pid 702
Jan 21 22:44:26 localhost augenrules[722]: rate_limit 0
Jan 21 22:44:26 localhost augenrules[722]: backlog_limit 8192
Jan 21 22:44:26 localhost augenrules[722]: lost 0
Jan 21 22:44:26 localhost augenrules[722]: backlog 2
Jan 21 22:44:26 localhost augenrules[722]: backlog_wait_time 60000
Jan 21 22:44:26 localhost augenrules[722]: backlog_wait_time_actual 0
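The three blocks above are kernel audit status reports relayed by augenrules after it loads the (empty) rule set; only the backlog counter changes between them. A small parser, with the first block inlined so it runs standalone, turns one report into a dict:

    raw = """\
    enabled 1
    failure 1
    pid 702
    rate_limit 0
    backlog_limit 8192
    lost 0
    backlog 0
    backlog_wait_time 60000
    backlog_wait_time_actual 0"""

    # split(maxsplit=1) tolerates leading indentation on each line
    status = {k: int(v) for k, v in (line.split(maxsplit=1) for line in raw.splitlines())}
    assert status["pid"] == 702 and status["lost"] == 0
    print(status)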
Jan 21 22:44:26 localhost systemd[1]: Started Security Auditing Service.
Jan 21 22:44:26 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 21 22:44:26 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 21 22:44:26 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 21 22:44:26 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 21 22:44:26 localhost systemd[1]: Starting Update is Completed...
Jan 21 22:44:26 localhost systemd[1]: Finished Update is Completed.
Jan 21 22:44:26 localhost systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Jan 21 22:44:26 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 21 22:44:26 localhost systemd[1]: Reached target System Initialization.
Jan 21 22:44:26 localhost systemd[1]: Started dnf makecache --timer.
Jan 21 22:44:26 localhost systemd[1]: Started Daily rotation of log files.
Jan 21 22:44:26 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 21 22:44:26 localhost systemd[1]: Reached target Timer Units.
Jan 21 22:44:26 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 21 22:44:26 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 21 22:44:26 localhost systemd[1]: Reached target Socket Units.
Jan 21 22:44:26 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 21 22:44:26 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 21 22:44:26 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 21 22:44:26 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 21 22:44:26 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 21 22:44:26 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 21 22:44:26 localhost systemd-udevd[741]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 22:44:26 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 21 22:44:26 localhost systemd[1]: Reached target Basic System.
Jan 21 22:44:26 localhost dbus-broker-lau[768]: Ready
Jan 21 22:44:26 localhost systemd[1]: Starting NTP client/server...
Jan 21 22:44:26 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 21 22:44:26 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 21 22:44:26 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 21 22:44:26 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 21 22:44:26 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 21 22:44:26 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 21 22:44:26 localhost systemd[1]: Started irqbalance daemon.
Jan 21 22:44:26 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 21 22:44:26 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 22:44:26 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 22:44:26 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 22:44:26 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 21 22:44:26 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 21 22:44:26 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 21 22:44:26 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 21 22:44:26 localhost systemd[1]: Starting User Login Management...
Jan 21 22:44:26 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 21 22:44:26 localhost chronyd[795]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 21 22:44:26 localhost chronyd[795]: Loaded 0 symmetric keys
Jan 21 22:44:26 localhost chronyd[795]: Using right/UTC timezone to obtain leap second data
Jan 21 22:44:26 localhost chronyd[795]: Loaded seccomp filter (level 2)
Jan 21 22:44:26 localhost systemd[1]: Started NTP client/server.
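chronyd is now up with the configuration shown above (seccomp level 2, no symmetric keys yet). A hedged sketch that asks the running daemon for its sync state; chronyc tracking is the stock chrony CLI, and splitting each line on the first ':' is an assumption about its plain-text layout:

    import subprocess

    out = subprocess.run(["chronyc", "tracking"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        key, _, value = line.partition(":")  # first ':' only; values may contain more
        print(key.strip(), "=>", value.strip())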
Jan 21 22:44:26 localhost systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 21 22:44:26 localhost systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 21 22:44:26 localhost systemd-logind[786]: New seat seat0.
Jan 21 22:44:26 localhost systemd[1]: Started User Login Management.
Jan 21 22:44:26 localhost kernel: kvm_amd: TSC scaling supported
Jan 21 22:44:26 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 21 22:44:26 localhost kernel: kvm_amd: Nested Paging enabled
Jan 21 22:44:26 localhost kernel: kvm_amd: LBR virtualization supported
Jan 21 22:44:26 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 21 22:44:26 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 21 22:44:26 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 21 22:44:26 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 21 22:44:26 localhost kernel: Console: switching to colour dummy device 80x25
Jan 21 22:44:26 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 21 22:44:26 localhost kernel: [drm] features: -context_init
Jan 21 22:44:26 localhost kernel: [drm] number of scanouts: 1
Jan 21 22:44:26 localhost kernel: [drm] number of cap sets: 0
Jan 21 22:44:26 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 21 22:44:26 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 21 22:44:26 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 21 22:44:26 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 21 22:44:26 localhost iptables.init[781]: iptables: Applying firewall rules: [  OK  ]
Jan 21 22:44:26 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 21 22:44:27 localhost cloud-init[841]: Cloud-init v. 24.4-8.el9 running 'init-local' at Wed, 21 Jan 2026 22:44:27 +0000. Up 6.73 seconds.
Jan 21 22:44:27 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 21 22:44:27 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 21 22:44:27 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp2ak53n6p.mount: Deactivated successfully.
Jan 21 22:44:27 localhost systemd[1]: Starting Hostname Service...
Jan 21 22:44:27 localhost systemd[1]: Started Hostname Service.
Jan 21 22:44:27 np0005591288.novalocal systemd-hostnamed[855]: Hostname set to <np0005591288.novalocal> (static)
Jan 21 22:44:27 np0005591288.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 21 22:44:27 np0005591288.novalocal systemd[1]: Reached target Preparation for Network.
Jan 21 22:44:27 np0005591288.novalocal systemd[1]: Starting Network Manager...
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.5786] NetworkManager (version 1.54.3-2.el9) is starting... (boot:52b6d350-1eb8-4a17-b2d9-800512411866)
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.5790] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.5866] manager[0x55a8b91c0000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.5920] hostname: hostname: using hostnamed
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.5920] hostname: static hostname changed from (none) to "np0005591288.novalocal"
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.5937] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6066] manager[0x55a8b91c0000]: rfkill: Wi-Fi hardware radio set enabled
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6067] manager[0x55a8b91c0000]: rfkill: WWAN hardware radio set enabled
Jan 21 22:44:27 np0005591288.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6109] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6110] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6110] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6111] manager: Networking is enabled by state file
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6113] settings: Loaded settings plugin: keyfile (internal)
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6154] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6177] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6189] dhcp: init: Using DHCP client 'internal'
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6191] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6206] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6214] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6223] device (lo): Activation: starting connection 'lo' (b77a1b8c-e360-4dc5-8be9-c999c9100350)
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6233] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6237] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6293] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6298] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6300] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6302] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6305] device (eth0): carrier: link connected
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6309] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6318] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6324] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6331] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6332] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6334] manager: NetworkManager state is now CONNECTING
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6339] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 22:44:27 np0005591288.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6346] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6349] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 22:44:27 np0005591288.novalocal systemd[1]: Started Network Manager.
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6403] dhcp4 (eth0): state changed new lease, address=38.102.83.227
Jan 21 22:44:27 np0005591288.novalocal systemd[1]: Reached target Network.
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6412] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6454] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 22:44:27 np0005591288.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 21 22:44:27 np0005591288.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 21 22:44:27 np0005591288.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6595] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6599] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6600] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6607] device (lo): Activation: successful, device activated.
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6621] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6624] manager: NetworkManager state is now CONNECTED_SITE
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6626] device (eth0): Activation: successful, device activated.
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6631] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 21 22:44:27 np0005591288.novalocal NetworkManager[859]: <info>  [1769035467.6633] manager: startup complete
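NetworkManager has reached CONNECTED_GLOBAL with eth0 activated on the DHCP lease above. A sketch cross-checking that state from userspace via the stock nmcli terse output (the expected device names are this host's, taken from the log, not a general assumption):

    import subprocess

    out = subprocess.run(["nmcli", "-t", "-f", "DEVICE,STATE", "device"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.strip().splitlines():
        device, _, state = line.partition(":")
        print(f"{device}: {state}")  # expect eth0 and lo both reported connected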
Jan 21 22:44:27 np0005591288.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 21 22:44:27 np0005591288.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 21 22:44:27 np0005591288.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 21 22:44:27 np0005591288.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 21 22:44:27 np0005591288.novalocal systemd[1]: Reached target NFS client services.
Jan 21 22:44:27 np0005591288.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 21 22:44:27 np0005591288.novalocal systemd[1]: Reached target Remote File Systems.
Jan 21 22:44:27 np0005591288.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 21 22:44:27 np0005591288.novalocal cloud-init[923]: Cloud-init v. 24.4-8.el9 running 'init' at Wed, 21 Jan 2026 22:44:27 +0000. Up 7.69 seconds.
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: |  eth0  | True |        38.102.83.227         | 255.255.255.0 | global | fa:16:3e:fd:fe:e3 |
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: |  eth0  | True | fe80::f816:3eff:fefd:fee3/64 |       .       |  link  | fa:16:3e:fd:fe:e3 |
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 21 22:44:28 np0005591288.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
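The ci-info tables above are internally consistent, which the stdlib can verify: the leased address with its 255.255.255.0 mask must land in the 38.102.83.0 connected route, and both the default gateway and the metadata hop must be on-link. A self-contained check:

    import ipaddress

    iface = ipaddress.IPv4Interface("38.102.83.227/255.255.255.0")
    print(iface.network)  # 38.102.83.0/24, matching route 1 above
    assert ipaddress.IPv4Address("38.102.83.1") in iface.network    # route 0 gateway
    assert ipaddress.IPv4Address("38.102.83.126") in iface.network  # route 2 next hop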
Jan 21 22:44:28 np0005591288.novalocal useradd[989]: new group: name=cloud-user, GID=1001
Jan 21 22:44:28 np0005591288.novalocal useradd[989]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 21 22:44:28 np0005591288.novalocal useradd[989]: add 'cloud-user' to group 'adm'
Jan 21 22:44:28 np0005591288.novalocal useradd[989]: add 'cloud-user' to group 'systemd-journal'
Jan 21 22:44:28 np0005591288.novalocal useradd[989]: add 'cloud-user' to shadow group 'adm'
Jan 21 22:44:28 np0005591288.novalocal useradd[989]: add 'cloud-user' to shadow group 'systemd-journal'
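cloud-init has just created cloud-user (UID/GID 1001) and added it to adm and systemd-journal. Run on the same host, the stdlib NSS bindings confirm those records:

    import grp
    import pwd

    user = pwd.getpwnam("cloud-user")
    print(user.pw_uid, user.pw_dir, user.pw_shell)  # 1001 /home/cloud-user /bin/bash
    for group in ("adm", "systemd-journal"):
        assert "cloud-user" in grp.getgrnam(group).gr_mem  # supplementary membership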
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: Generating public/private rsa key pair.
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: The key fingerprint is:
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: SHA256:aVcXDhi/yXAftKG2tMN7HtXigtcpAWVQRvPzDn9c/HY root@np0005591288.novalocal
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: The key's randomart image is:
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: +---[RSA 3072]----+
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |          o*O +  |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |          .= B + |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |          o B B  |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |         . X B =.|
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |        S . X + *|
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |       . . . * Oo|
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |          . = * E|
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |           . = oo|
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |              .  |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: +----[SHA256]-----+
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: Generating public/private ecdsa key pair.
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: The key fingerprint is:
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: SHA256:FFgQQHGBHWoYHrqxMEPIVkGFB6K2FReA5hg048Vvbig root@np0005591288.novalocal
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: The key's randomart image is:
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: +---[ECDSA 256]---+
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |+*BXXOBBo        |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |*B=*++o  .       |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |%+o.=   .        |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |+Bo. o .         |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |o.  +   S        |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: | E . o           |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |  . .            |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |                 |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |                 |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: +----[SHA256]-----+
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: Generating public/private ed25519 key pair.
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: The key fingerprint is:
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: SHA256:Hk0His2P4R3JYrpihZEU4rEDq7AUuVwN8J5yM9BKVtc root@np0005591288.novalocal
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: The key's randomart image is:
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: +--[ED25519 256]--+
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |.o=o=o.   .      |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: | ==*.o E o o     |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |+=*oo . B = .    |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |*++..o + O o     |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |oo *. o S +      |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |  o o. o .       |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |    o . .        |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |   . .           |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: |                 |
Jan 21 22:44:29 np0005591288.novalocal cloud-init[923]: +----[SHA256]-----+
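The SHA256 fingerprints ssh-keygen prints above are base64(sha256(raw key blob)) with the trailing '=' padding stripped. A short sketch recomputing one from the .pub file cloud-init just wrote:

    import base64
    import hashlib

    with open("/etc/ssh/ssh_host_ed25519_key.pub") as f:
        blob = base64.b64decode(f.read().split()[1])  # field 2 is the key blob

    digest = hashlib.sha256(blob).digest()
    print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))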
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Reached target Network is Online.
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Starting System Logging Service...
Jan 21 22:44:29 np0005591288.novalocal sm-notify[1005]: Version 2.5.4 starting
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Starting Permit User Sessions...
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Finished Permit User Sessions.
Jan 21 22:44:29 np0005591288.novalocal sshd[1007]: Server listening on 0.0.0.0 port 22.
Jan 21 22:44:29 np0005591288.novalocal sshd[1007]: Server listening on :: port 22.
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Started Command Scheduler.
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Started Getty on tty1.
Jan 21 22:44:29 np0005591288.novalocal crond[1010]: (CRON) STARTUP (1.5.7)
Jan 21 22:44:29 np0005591288.novalocal crond[1010]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 21 22:44:29 np0005591288.novalocal crond[1010]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 63% if used.)
Jan 21 22:44:29 np0005591288.novalocal crond[1010]: (CRON) INFO (running with inotify support)
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Reached target Login Prompts.
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 21 22:44:29 np0005591288.novalocal rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Jan 21 22:44:29 np0005591288.novalocal rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Started System Logging Service.
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Reached target Multi-User System.
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 21 22:44:29 np0005591288.novalocal rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 22:44:29 np0005591288.novalocal kdumpctl[1016]: kdump: No kdump initial ramdisk found.
Jan 21 22:44:29 np0005591288.novalocal kdumpctl[1016]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 21 22:44:29 np0005591288.novalocal cloud-init[1098]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Wed, 21 Jan 2026 22:44:29 +0000. Up 9.35 seconds.
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 21 22:44:29 np0005591288.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 21 22:44:30 np0005591288.novalocal cloud-init[1268]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Wed, 21 Jan 2026 22:44:30 +0000. Up 9.74 seconds.
Jan 21 22:44:30 np0005591288.novalocal dracut[1270]: dracut-057-102.git20250818.el9
Jan 21 22:44:30 np0005591288.novalocal cloud-init[1287]: #############################################################
Jan 21 22:44:30 np0005591288.novalocal cloud-init[1288]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 21 22:44:30 np0005591288.novalocal cloud-init[1290]: 256 SHA256:FFgQQHGBHWoYHrqxMEPIVkGFB6K2FReA5hg048Vvbig root@np0005591288.novalocal (ECDSA)
Jan 21 22:44:30 np0005591288.novalocal cloud-init[1292]: 256 SHA256:Hk0His2P4R3JYrpihZEU4rEDq7AUuVwN8J5yM9BKVtc root@np0005591288.novalocal (ED25519)
Jan 21 22:44:30 np0005591288.novalocal cloud-init[1294]: 3072 SHA256:aVcXDhi/yXAftKG2tMN7HtXigtcpAWVQRvPzDn9c/HY root@np0005591288.novalocal (RSA)
Jan 21 22:44:30 np0005591288.novalocal cloud-init[1295]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 21 22:44:30 np0005591288.novalocal cloud-init[1296]: #############################################################
Jan 21 22:44:30 np0005591288.novalocal cloud-init[1268]: Cloud-init v. 24.4-8.el9 finished at Wed, 21 Jan 2026 22:44:30 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.97 seconds
Jan 21 22:44:30 np0005591288.novalocal dracut[1272]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 21 22:44:30 np0005591288.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 21 22:44:30 np0005591288.novalocal systemd[1]: Reached target Cloud-init target.
Jan 21 22:44:30 np0005591288.novalocal sshd-session[1379]: Connection closed by 38.102.83.114 port 45456 [preauth]
Jan 21 22:44:30 np0005591288.novalocal sshd-session[1384]: Unable to negotiate with 38.102.83.114 port 38452: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 21 22:44:30 np0005591288.novalocal sshd-session[1396]: Unable to negotiate with 38.102.83.114 port 38468: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 21 22:44:30 np0005591288.novalocal sshd-session[1402]: Unable to negotiate with 38.102.83.114 port 38476: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 21 22:44:30 np0005591288.novalocal sshd-session[1404]: Connection reset by 38.102.83.114 port 38484 [preauth]
Jan 21 22:44:30 np0005591288.novalocal sshd-session[1389]: Connection closed by 38.102.83.114 port 38460 [preauth]
Jan 21 22:44:30 np0005591288.novalocal sshd-session[1417]: Unable to negotiate with 38.102.83.114 port 38502: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 21 22:44:30 np0005591288.novalocal sshd-session[1422]: Unable to negotiate with 38.102.83.114 port 38506: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
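This burst of [preauth] failures within a second of sshd starting looks like a scanner enumerating which host key types the server will offer. A self-contained parser (two of the lines above inlined as samples) tallies the algorithms probed:

    import re
    from collections import Counter

    samples = [
        "Unable to negotiate with 38.102.83.114 port 38452: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]",
        "Unable to negotiate with 38.102.83.114 port 38506: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]",
    ]

    offers = Counter()
    for line in samples:
        m = re.search(r"Their offer: (\S+) \[preauth\]", line)
        if m:
            offers.update(m.group(1).split(","))
    print(offers.most_common())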
Jan 21 22:44:30 np0005591288.novalocal dracut[1272]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 21 22:44:30 np0005591288.novalocal dracut[1272]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 21 22:44:30 np0005591288.novalocal dracut[1272]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 21 22:44:30 np0005591288.novalocal sshd-session[1412]: Connection closed by 38.102.83.114 port 38486 [preauth]
Jan 21 22:44:30 np0005591288.novalocal dracut[1272]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 21 22:44:30 np0005591288.novalocal dracut[1272]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 21 22:44:30 np0005591288.novalocal dracut[1272]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 21 22:44:30 np0005591288.novalocal dracut[1272]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 21 22:44:30 np0005591288.novalocal dracut[1272]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 21 22:44:30 np0005591288.novalocal dracut[1272]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 21 22:44:30 np0005591288.novalocal dracut[1272]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: memstrack is not available
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: memstrack is not available
Jan 21 22:44:31 np0005591288.novalocal dracut[1272]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
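dracut emits its missing-command module checks twice during this kdump image build (compare the 22:44:30 and 22:44:31 runs above), so a deduplicating summary is handy when auditing what got left out. A sketch with sample lines inlined so it runs standalone:

    import re

    samples = [
        "dracut module 'lvm' will not be installed, because command 'lvm' could not be found!",
        "dracut module 'lvm' will not be installed, because command 'lvm' could not be found!",
        "dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!",
    ]

    skipped = {}
    for line in samples:
        m = re.match(r"dracut module '(\S+)' will not be installed, because command '(\S+)'", line)
        if m:
            skipped.setdefault(m.group(1), set()).add(m.group(2))
    for module, cmds in sorted(skipped.items()):
        print(f"{module}: missing {', '.join(sorted(cmds))}")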
Jan 21 22:44:32 np0005591288.novalocal dracut[1272]: *** Including module: systemd ***
Jan 21 22:44:32 np0005591288.novalocal dracut[1272]: *** Including module: fips ***
Jan 21 22:44:32 np0005591288.novalocal chronyd[795]: Selected source 167.160.187.179 (2.centos.pool.ntp.org)
Jan 21 22:44:32 np0005591288.novalocal chronyd[795]: System clock TAI offset set to 37 seconds
Jan 21 22:44:32 np0005591288.novalocal dracut[1272]: *** Including module: systemd-initrd ***
Jan 21 22:44:32 np0005591288.novalocal dracut[1272]: *** Including module: i18n ***
Jan 21 22:44:33 np0005591288.novalocal dracut[1272]: *** Including module: drm ***
Jan 21 22:44:33 np0005591288.novalocal dracut[1272]: *** Including module: prefixdevname ***
Jan 21 22:44:33 np0005591288.novalocal dracut[1272]: *** Including module: kernel-modules ***
Jan 21 22:44:33 np0005591288.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 21 22:44:34 np0005591288.novalocal chronyd[795]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Jan 21 22:44:34 np0005591288.novalocal dracut[1272]: *** Including module: kernel-modules-extra ***
Jan 21 22:44:34 np0005591288.novalocal dracut[1272]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 21 22:44:34 np0005591288.novalocal dracut[1272]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 21 22:44:34 np0005591288.novalocal dracut[1272]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 21 22:44:34 np0005591288.novalocal dracut[1272]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 21 22:44:34 np0005591288.novalocal dracut[1272]: *** Including module: qemu ***
Jan 21 22:44:34 np0005591288.novalocal dracut[1272]: *** Including module: fstab-sys ***
Jan 21 22:44:34 np0005591288.novalocal dracut[1272]: *** Including module: rootfs-block ***
Jan 21 22:44:34 np0005591288.novalocal dracut[1272]: *** Including module: terminfo ***
Jan 21 22:44:34 np0005591288.novalocal dracut[1272]: *** Including module: udev-rules ***
Jan 21 22:44:35 np0005591288.novalocal dracut[1272]: Skipping udev rule: 91-permissions.rules
Jan 21 22:44:35 np0005591288.novalocal dracut[1272]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 21 22:44:35 np0005591288.novalocal dracut[1272]: *** Including module: virtiofs ***
Jan 21 22:44:35 np0005591288.novalocal dracut[1272]: *** Including module: dracut-systemd ***
Jan 21 22:44:35 np0005591288.novalocal dracut[1272]: *** Including module: usrmount ***
Jan 21 22:44:35 np0005591288.novalocal dracut[1272]: *** Including module: base ***
Jan 21 22:44:35 np0005591288.novalocal dracut[1272]: *** Including module: fs-lib ***
Jan 21 22:44:35 np0005591288.novalocal dracut[1272]: *** Including module: kdumpbase ***
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:   microcode_ctl module: mangling fw_dir
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: configuration "intel" is ignored
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]: *** Including module: openssl ***
Jan 21 22:44:36 np0005591288.novalocal dracut[1272]: *** Including module: shutdown ***
Jan 21 22:44:37 np0005591288.novalocal dracut[1272]: *** Including module: squash ***
Jan 21 22:44:37 np0005591288.novalocal dracut[1272]: *** Including modules done ***
Jan 21 22:44:37 np0005591288.novalocal dracut[1272]: *** Installing kernel module dependencies ***
Jan 21 22:44:37 np0005591288.novalocal irqbalance[782]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 21 22:44:37 np0005591288.novalocal irqbalance[782]: IRQ 25 affinity is now unmanaged
Jan 21 22:44:37 np0005591288.novalocal irqbalance[782]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 21 22:44:37 np0005591288.novalocal irqbalance[782]: IRQ 31 affinity is now unmanaged
Jan 21 22:44:37 np0005591288.novalocal irqbalance[782]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 21 22:44:37 np0005591288.novalocal irqbalance[782]: IRQ 28 affinity is now unmanaged
Jan 21 22:44:37 np0005591288.novalocal irqbalance[782]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 21 22:44:37 np0005591288.novalocal irqbalance[782]: IRQ 32 affinity is now unmanaged
Jan 21 22:44:37 np0005591288.novalocal irqbalance[782]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 21 22:44:37 np0005591288.novalocal irqbalance[782]: IRQ 30 affinity is now unmanaged
Jan 21 22:44:37 np0005591288.novalocal irqbalance[782]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 21 22:44:37 np0005591288.novalocal irqbalance[782]: IRQ 29 affinity is now unmanaged
Jan 21 22:44:37 np0005591288.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 22:44:38 np0005591288.novalocal dracut[1272]: *** Installing kernel module dependencies done ***
Jan 21 22:44:38 np0005591288.novalocal dracut[1272]: *** Resolving executable dependencies ***
Jan 21 22:44:39 np0005591288.novalocal dracut[1272]: *** Resolving executable dependencies done ***
Jan 21 22:44:39 np0005591288.novalocal dracut[1272]: *** Generating early-microcode cpio image ***
Jan 21 22:44:39 np0005591288.novalocal dracut[1272]: *** Store current command line parameters ***
Jan 21 22:44:39 np0005591288.novalocal dracut[1272]: Stored kernel commandline:
Jan 21 22:44:39 np0005591288.novalocal dracut[1272]: No dracut internal kernel commandline stored in the initramfs
Jan 21 22:44:39 np0005591288.novalocal dracut[1272]: *** Install squash loader ***
Jan 21 22:44:40 np0005591288.novalocal dracut[1272]: *** Squashing the files inside the initramfs ***
Jan 21 22:44:41 np0005591288.novalocal dracut[1272]: *** Squashing the files inside the initramfs done ***
Jan 21 22:44:41 np0005591288.novalocal dracut[1272]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 21 22:44:41 np0005591288.novalocal dracut[1272]: *** Hardlinking files ***
Jan 21 22:44:41 np0005591288.novalocal dracut[1272]: Mode:           real
Jan 21 22:44:41 np0005591288.novalocal dracut[1272]: Files:          50
Jan 21 22:44:41 np0005591288.novalocal dracut[1272]: Linked:         0 files
Jan 21 22:44:41 np0005591288.novalocal dracut[1272]: Compared:       0 xattrs
Jan 21 22:44:41 np0005591288.novalocal dracut[1272]: Compared:       0 files
Jan 21 22:44:41 np0005591288.novalocal dracut[1272]: Saved:          0 B
Jan 21 22:44:41 np0005591288.novalocal dracut[1272]: Duration:       0.000811 seconds
Jan 21 22:44:41 np0005591288.novalocal dracut[1272]: *** Hardlinking files done ***
Jan 21 22:44:42 np0005591288.novalocal dracut[1272]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 21 22:44:42 np0005591288.novalocal kdumpctl[1016]: kdump: kexec: loaded kdump kernel
Jan 21 22:44:42 np0005591288.novalocal kdumpctl[1016]: kdump: Starting kdump: [OK]
Jan 21 22:44:42 np0005591288.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 21 22:44:42 np0005591288.novalocal systemd[1]: Startup finished in 1.678s (kernel) + 2.791s (initrd) + 17.882s (userspace) = 22.351s.
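[Editor's note] The "Startup finished" summary is simply the sum of the three boot phases systemd measures. A quick check with the values copied from the line above:

    # kernel + initrd + userspace should equal the printed total.
    kernel, initrd, userspace = 1.678, 2.791, 17.882
    total = kernel + initrd + userspace
    assert round(total, 3) == 22.351
    print(f"{total:.3f}s")  # 22.351s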
Jan 21 22:44:46 np0005591288.novalocal sshd-session[4304]: Accepted publickey for zuul from 38.102.83.114 port 35020 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 21 22:44:46 np0005591288.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 21 22:44:46 np0005591288.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 21 22:44:46 np0005591288.novalocal systemd-logind[786]: New session 1 of user zuul.
Jan 21 22:44:46 np0005591288.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 21 22:44:46 np0005591288.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 21 22:44:46 np0005591288.novalocal systemd[4308]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 22:44:46 np0005591288.novalocal systemd[4308]: Queued start job for default target Main User Target.
Jan 21 22:44:46 np0005591288.novalocal systemd[4308]: Created slice User Application Slice.
Jan 21 22:44:46 np0005591288.novalocal systemd[4308]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 21 22:44:46 np0005591288.novalocal systemd[4308]: Started Daily Cleanup of User's Temporary Directories.
Jan 21 22:44:46 np0005591288.novalocal systemd[4308]: Reached target Paths.
Jan 21 22:44:46 np0005591288.novalocal systemd[4308]: Reached target Timers.
Jan 21 22:44:46 np0005591288.novalocal systemd[4308]: Starting D-Bus User Message Bus Socket...
Jan 21 22:44:46 np0005591288.novalocal systemd[4308]: Starting Create User's Volatile Files and Directories...
Jan 21 22:44:46 np0005591288.novalocal systemd[4308]: Finished Create User's Volatile Files and Directories.
Jan 21 22:44:46 np0005591288.novalocal systemd[4308]: Listening on D-Bus User Message Bus Socket.
Jan 21 22:44:46 np0005591288.novalocal systemd[4308]: Reached target Sockets.
Jan 21 22:44:46 np0005591288.novalocal systemd[4308]: Reached target Basic System.
Jan 21 22:44:46 np0005591288.novalocal systemd[4308]: Reached target Main User Target.
Jan 21 22:44:46 np0005591288.novalocal systemd[4308]: Startup finished in 127ms.
Jan 21 22:44:46 np0005591288.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 21 22:44:46 np0005591288.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 21 22:44:46 np0005591288.novalocal sshd-session[4304]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 22:44:47 np0005591288.novalocal python3[4390]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 22:44:49 np0005591288.novalocal python3[4418]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 22:44:57 np0005591288.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 21 22:44:57 np0005591288.novalocal python3[4478]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 22:44:58 np0005591288.novalocal python3[4518]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
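[Editor's note] ansible-zuul_console starts Zuul's console-log streamer on port 19885 (the port shown in the invocation; the `{log_uuid}` placeholder in the path is filled per task). A hypothetical client sketch, assuming Zuul's simple send-the-UUID-then-read protocol; the host and UUID below are placeholders:

    import socket

    def stream_console(host: str, log_uuid: str, port: int = 19885) -> None:
        # Connect to the zuul_console daemon and print the streamed log.
        with socket.create_connection((host, port)) as sock:
            sock.sendall(f"{log_uuid}\n".encode())
            while chunk := sock.recv(4096):
                print(chunk.decode(errors="replace"), end="")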
Jan 21 22:45:00 np0005591288.novalocal python3[4544]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/6G0PTw8MERqfykjKC/MunKT5Omf5DqXYX/CgSydqrRegVRqrDSIXGetVPcjC0QG4Vfp21H+nGS/mFDaGwkzGJ4gp4mlpjoF3fRBc+CkmILs3i1Tm4R9i4CsL/xKbUG3/NnihpNaWrAHhT/6UMuWR7nWbZD8DqVVzJ76VDW9NkRMyBPGXn8jrC+5Z0sZitl9AEo2xNODVmRczqm/zFS+brbgmdLVlhjCk7Wa6iGzWI1nQ7hmrsAI4ufvXkeChvSHHTJOyDBHym75cDKgNPOm5nFZALgPbxhun0I2+7W6niOzhVY/uajpJ1fkzuvu69e1TZTfbwuaH1Om4M42ngsUrhzopc+Tsr2U+iQFmg0eLnp3A8ZNVfJTrYMSW4Wmi9BvQQuK+CHUyAW2u2A8eB0c0I+axJFYSAARcP7eiwOZwwxlL6TfgjryZA85QBNI4Nf9jb+RLuD0ST2OVX2WuqdH0h6XYEtirdZeIuPq9VN1DVlr7iTXydEySJ2DqAObJ18U= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:01 np0005591288.novalocal python3[4568]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:45:01 np0005591288.novalocal python3[4667]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 22:45:02 np0005591288.novalocal python3[4738]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769035501.4777575-251-76380642343512/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=712c601c7a17485589ea2f3ca5d142f6_id_rsa follow=False checksum=9e0618e4e91300a4df1c304bf071e2c90bea004f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:45:03 np0005591288.novalocal python3[4861]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 22:45:03 np0005591288.novalocal python3[4932]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769035502.7438288-306-28145396920584/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=712c601c7a17485589ea2f3ca5d142f6_id_rsa.pub follow=False checksum=fd7c009312ab1e68f2f2a411c3c22b9a2d672703 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:45:04 np0005591288.novalocal python3[4980]: ansible-ping Invoked with data=pong
Jan 21 22:45:05 np0005591288.novalocal python3[5004]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 22:45:08 np0005591288.novalocal python3[5062]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 21 22:45:09 np0005591288.novalocal python3[5094]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:45:09 np0005591288.novalocal python3[5118]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:45:09 np0005591288.novalocal python3[5142]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:45:10 np0005591288.novalocal python3[5166]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:45:10 np0005591288.novalocal python3[5190]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:45:10 np0005591288.novalocal python3[5214]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
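[Editor's note] The `mode=` values Ansible logs here are plain decimal integers, not octal: `mode=448` on ~/.ssh is 0o700 and `mode=493` on the zuul-output directories is 0o755. The other modes in this log decode the same way:

    for mode in (448, 493, 511, 384, 420, 288):
        print(mode, oct(mode))
    # 448 -> 0o700  (~/.ssh above)
    # 493 -> 0o755  (the zuul-output directories)
    # 511 -> 0o777  (/etc/nodepool, later)
    # 384 -> 0o600, 420 -> 0o644  (id_rsa / id_rsa.pub)
    # 288 -> 0o440  (/etc/sudoers.d/zuul-sudo-grep, later)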
Jan 21 22:45:12 np0005591288.novalocal sudo[5238]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmsbxyxndodhallmlfdxqmdunceimksf ; /usr/bin/python3'
Jan 21 22:45:12 np0005591288.novalocal sudo[5238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:45:12 np0005591288.novalocal python3[5240]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:45:12 np0005591288.novalocal sudo[5238]: pam_unix(sudo:session): session closed for user root
Jan 21 22:45:12 np0005591288.novalocal sudo[5316]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcbinwprvftnejlasvcvlctgbkqdvlux ; /usr/bin/python3'
Jan 21 22:45:12 np0005591288.novalocal sudo[5316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:45:13 np0005591288.novalocal python3[5318]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 22:45:13 np0005591288.novalocal sudo[5316]: pam_unix(sudo:session): session closed for user root
Jan 21 22:45:13 np0005591288.novalocal sudo[5389]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueajpqsaxwqszewpfmnzynnbhugsdefa ; /usr/bin/python3'
Jan 21 22:45:13 np0005591288.novalocal sudo[5389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:45:13 np0005591288.novalocal python3[5391]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769035512.6462967-31-18458748763448/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:45:13 np0005591288.novalocal sudo[5389]: pam_unix(sudo:session): session closed for user root
Jan 21 22:45:14 np0005591288.novalocal python3[5439]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:14 np0005591288.novalocal python3[5463]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:14 np0005591288.novalocal python3[5487]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:15 np0005591288.novalocal python3[5511]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:15 np0005591288.novalocal python3[5535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:15 np0005591288.novalocal python3[5559]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:15 np0005591288.novalocal python3[5583]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:16 np0005591288.novalocal python3[5607]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:16 np0005591288.novalocal python3[5631]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:16 np0005591288.novalocal python3[5655]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:17 np0005591288.novalocal python3[5679]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:17 np0005591288.novalocal python3[5703]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:17 np0005591288.novalocal python3[5727]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:17 np0005591288.novalocal python3[5751]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:18 np0005591288.novalocal python3[5775]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:18 np0005591288.novalocal python3[5799]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:18 np0005591288.novalocal python3[5823]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:19 np0005591288.novalocal python3[5847]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:19 np0005591288.novalocal python3[5871]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:19 np0005591288.novalocal python3[5895]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:20 np0005591288.novalocal python3[5919]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:20 np0005591288.novalocal python3[5943]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:20 np0005591288.novalocal python3[5967]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:20 np0005591288.novalocal python3[5991]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:21 np0005591288.novalocal python3[6015]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:45:21 np0005591288.novalocal python3[6039]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
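[Editor's note] The long run of ansible-authorized_key invocations above installs per-developer SSH keys for the zuul user. With state=present the module is idempotent: each key is appended to ~/.ssh/authorized_keys only if not already there. A minimal sketch of that core behavior (ignoring locking, key validation, and the manage_dir/exclusive options):

    from pathlib import Path

    def ensure_key(user_home: str, key: str) -> None:
        # Append `key` to authorized_keys if absent; keep strict perms.
        auth = Path(user_home) / ".ssh" / "authorized_keys"
        auth.parent.mkdir(mode=0o700, exist_ok=True)
        lines = auth.read_text().splitlines() if auth.exists() else []
        if key not in lines:
            lines.append(key)
            auth.write_text("\n".join(lines) + "\n")
            auth.chmod(0o600)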
Jan 21 22:45:23 np0005591288.novalocal sudo[6063]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvmczqhpxrnnqurdevphlemqcioylran ; /usr/bin/python3'
Jan 21 22:45:23 np0005591288.novalocal sudo[6063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:45:23 np0005591288.novalocal python3[6065]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 21 22:45:23 np0005591288.novalocal systemd[1]: Starting Time & Date Service...
Jan 21 22:45:23 np0005591288.novalocal systemd[1]: Started Time & Date Service.
Jan 21 22:45:23 np0005591288.novalocal systemd-timedated[6067]: Changed time zone to 'UTC' (UTC).
Jan 21 22:45:24 np0005591288.novalocal sudo[6063]: pam_unix(sudo:session): session closed for user root
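[Editor's note] community.general.timezone delegates to systemd-timedated, which is why the Time & Date Service starts on demand and logs "Changed time zone to 'UTC'" above. The equivalent manual step on this host would be:

    import subprocess

    # Same effect as the ansible task: ask systemd-timedated to switch zones.
    subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)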
Jan 21 22:45:24 np0005591288.novalocal sudo[6094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppjcqgzndykwvbzbcupzpljkrarbjvjl ; /usr/bin/python3'
Jan 21 22:45:24 np0005591288.novalocal sudo[6094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:45:24 np0005591288.novalocal python3[6096]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:45:24 np0005591288.novalocal sudo[6094]: pam_unix(sudo:session): session closed for user root
Jan 21 22:45:24 np0005591288.novalocal python3[6172]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 22:45:25 np0005591288.novalocal python3[6243]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769035524.6528873-251-20707754274229/source _original_basename=tmps84hbs10 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:45:25 np0005591288.novalocal python3[6343]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 22:45:26 np0005591288.novalocal python3[6414]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769035525.5918045-301-165543407140346/source _original_basename=tmpnvkdcocb follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:45:27 np0005591288.novalocal sudo[6514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqdmvmpmbgbvmziihajikuhimbiqkhyp ; /usr/bin/python3'
Jan 21 22:45:27 np0005591288.novalocal sudo[6514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:45:27 np0005591288.novalocal python3[6516]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 22:45:27 np0005591288.novalocal sudo[6514]: pam_unix(sudo:session): session closed for user root
Jan 21 22:45:27 np0005591288.novalocal sudo[6587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bscxybpfpdmlyzvpbgbruvtfjfhhiqfk ; /usr/bin/python3'
Jan 21 22:45:27 np0005591288.novalocal sudo[6587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:45:27 np0005591288.novalocal python3[6589]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769035526.8827333-381-193382945232044/source _original_basename=tmp1mkwtouc follow=False checksum=3c9e47928860e7b69943a96b1d9d3969aefe1031 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:45:27 np0005591288.novalocal sudo[6587]: pam_unix(sudo:session): session closed for user root
Jan 21 22:45:28 np0005591288.novalocal python3[6637]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 22:45:28 np0005591288.novalocal python3[6663]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 22:45:28 np0005591288.novalocal sudo[6741]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jconksfaxwndxyvfvzwsvxnmtnioctie ; /usr/bin/python3'
Jan 21 22:45:28 np0005591288.novalocal sudo[6741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:45:28 np0005591288.novalocal python3[6743]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 22:45:28 np0005591288.novalocal sudo[6741]: pam_unix(sudo:session): session closed for user root
Jan 21 22:45:29 np0005591288.novalocal sudo[6814]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojgwvvxnhmgjkjwksmgwlfursgpscrcc ; /usr/bin/python3'
Jan 21 22:45:29 np0005591288.novalocal sudo[6814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:45:29 np0005591288.novalocal python3[6816]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769035528.5805335-451-125876748918305/source _original_basename=tmphyqbd04a follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:45:29 np0005591288.novalocal sudo[6814]: pam_unix(sudo:session): session closed for user root
Jan 21 22:45:29 np0005591288.novalocal sudo[6865]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyshpqnetndjvjuvbugcweherobqnkez ; /usr/bin/python3'
Jan 21 22:45:29 np0005591288.novalocal sudo[6865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:45:30 np0005591288.novalocal python3[6867]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-1d15-f1bc-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 22:45:30 np0005591288.novalocal sudo[6865]: pam_unix(sudo:session): session closed for user root
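[Editor's note] The sequence above is an install-then-validate pattern: the job writes /etc/sudoers.d/zuul-sudo-grep at mode 0440 and then runs `/usr/sbin/visudo -c`, which syntax-checks /etc/sudoers and everything under /etc/sudoers.d. A sketch of the same pattern, with rollback on a broken drop-in (the rollback step is an assumption; the log only shows the check):

    import os
    import subprocess

    def install_sudoers_dropin(path: str, content: str) -> None:
        with open(path, "w") as f:
            f.write(content)
        os.chmod(path, 0o440)
        try:
            subprocess.run(["/usr/sbin/visudo", "-c"], check=True)
        except subprocess.CalledProcessError:
            os.remove(path)  # never leave an unparseable sudoers file behind
            raise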
Jan 21 22:45:30 np0005591288.novalocal python3[6895]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163e3b-3c83-1d15-f1bc-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 21 22:45:32 np0005591288.novalocal python3[6923]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:45:38 np0005591288.novalocal chronyd[795]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Jan 21 22:45:50 np0005591288.novalocal sudo[6947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cozejghybevnrasvwddvktjwyabpchse ; /usr/bin/python3'
Jan 21 22:45:50 np0005591288.novalocal sudo[6947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:45:50 np0005591288.novalocal python3[6949]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:45:50 np0005591288.novalocal sudo[6947]: pam_unix(sudo:session): session closed for user root
Jan 21 22:45:54 np0005591288.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 21 22:46:34 np0005591288.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 21 22:46:34 np0005591288.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 21 22:46:34 np0005591288.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 21 22:46:34 np0005591288.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 21 22:46:34 np0005591288.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 21 22:46:34 np0005591288.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 21 22:46:34 np0005591288.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 21 22:46:34 np0005591288.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 21 22:46:34 np0005591288.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 21 22:46:34 np0005591288.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
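[Editor's note] The kernel lines above record a second NIC being hot-plugged at runtime: PCI device 0000:00:07.0 with vendor:device 1af4:1000 (a virtio network device), its BARs assigned and the device enabled. Those IDs can be read back from sysfs once the device exists:

    from pathlib import Path

    # PCI address and expected IDs come from the kernel lines above.
    dev = Path("/sys/bus/pci/devices/0000:00:07.0")
    vendor = (dev / "vendor").read_text().strip()  # expect 0x1af4
    device = (dev / "device").read_text().strip()  # expect 0x1000
    print(vendor, device)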
Jan 21 22:46:34 np0005591288.novalocal NetworkManager[859]: <info>  [1769035594.6203] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 21 22:46:34 np0005591288.novalocal systemd-udevd[6952]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 22:46:34 np0005591288.novalocal NetworkManager[859]: <info>  [1769035594.6406] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 22:46:34 np0005591288.novalocal NetworkManager[859]: <info>  [1769035594.6430] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 21 22:46:34 np0005591288.novalocal NetworkManager[859]: <info>  [1769035594.6433] device (eth1): carrier: link connected
Jan 21 22:46:34 np0005591288.novalocal NetworkManager[859]: <info>  [1769035594.6434] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 21 22:46:34 np0005591288.novalocal NetworkManager[859]: <info>  [1769035594.6440] policy: auto-activating connection 'Wired connection 1' (8b2b191b-f4f0-3a8f-bed2-162c0f2abdba)
Jan 21 22:46:34 np0005591288.novalocal NetworkManager[859]: <info>  [1769035594.6444] device (eth1): Activation: starting connection 'Wired connection 1' (8b2b191b-f4f0-3a8f-bed2-162c0f2abdba)
Jan 21 22:46:34 np0005591288.novalocal NetworkManager[859]: <info>  [1769035594.6445] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 22:46:34 np0005591288.novalocal NetworkManager[859]: <info>  [1769035594.6447] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 22:46:34 np0005591288.novalocal NetworkManager[859]: <info>  [1769035594.6451] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 22:46:34 np0005591288.novalocal NetworkManager[859]: <info>  [1769035594.6455] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
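[Editor's note] NetworkManager has now picked up the hot-plugged eth1 and started DHCP on it; the job then inspects the link list with `ip -j link` (next line). A sketch of polling that same JSON output until a named interface appears:

    import json
    import subprocess
    import time

    def wait_for_link(name: str, timeout: float = 60.0) -> dict:
        # Poll `ip -j link` until an interface with this name shows up.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            out = subprocess.run(["ip", "-j", "link"], check=True,
                                 capture_output=True, text=True).stdout
            for link in json.loads(out):
                if link.get("ifname") == name:
                    return link
            time.sleep(1)
        raise TimeoutError(name)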
Jan 21 22:46:35 np0005591288.novalocal python3[6979]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-8aa6-eb9d-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 22:46:45 np0005591288.novalocal sudo[7057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrpefuobwfkmfubbawcnqhxeylaslqvw ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 21 22:46:45 np0005591288.novalocal sudo[7057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:46:45 np0005591288.novalocal python3[7059]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 22:46:45 np0005591288.novalocal sudo[7057]: pam_unix(sudo:session): session closed for user root
Jan 21 22:46:45 np0005591288.novalocal sudo[7130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktabkwsjlbpfgijqiufyawbupzkwbvsm ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 21 22:46:45 np0005591288.novalocal sudo[7130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:46:45 np0005591288.novalocal python3[7132]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769035605.1737943-104-177681450999321/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=ba0d03f8e2cdda8751680237b083d13411fa526d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:46:45 np0005591288.novalocal sudo[7130]: pam_unix(sudo:session): session closed for user root
Jan 21 22:46:46 np0005591288.novalocal sudo[7180]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czgocylwzmouzbhgbulsresarnxzgfmg ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 21 22:46:46 np0005591288.novalocal sudo[7180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:46:46 np0005591288.novalocal python3[7182]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 22:46:46 np0005591288.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 21 22:46:46 np0005591288.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 21 22:46:46 np0005591288.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 21 22:46:46 np0005591288.novalocal systemd[1]: Stopping Network Manager...
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[859]: <info>  [1769035606.6646] caught SIGTERM, shutting down normally.
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[859]: <info>  [1769035606.6654] dhcp4 (eth0): canceled DHCP transaction
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[859]: <info>  [1769035606.6654] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[859]: <info>  [1769035606.6654] dhcp4 (eth0): state changed no lease
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[859]: <info>  [1769035606.6656] manager: NetworkManager state is now CONNECTING
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[859]: <info>  [1769035606.6756] dhcp4 (eth1): canceled DHCP transaction
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[859]: <info>  [1769035606.6756] dhcp4 (eth1): state changed no lease
Jan 21 22:46:46 np0005591288.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[859]: <info>  [1769035606.6810] exiting (success)
Jan 21 22:46:46 np0005591288.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 22:46:46 np0005591288.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 21 22:46:46 np0005591288.novalocal systemd[1]: Stopped Network Manager.
Jan 21 22:46:46 np0005591288.novalocal systemd[1]: NetworkManager.service: Consumed 1.068s CPU time, 10.2M memory peak.
Jan 21 22:46:46 np0005591288.novalocal systemd[1]: Starting Network Manager...
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.7311] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:52b6d350-1eb8-4a17-b2d9-800512411866)
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.7315] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.7380] manager[0x560efb41b000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 21 22:46:46 np0005591288.novalocal systemd[1]: Starting Hostname Service...
Jan 21 22:46:46 np0005591288.novalocal systemd[1]: Started Hostname Service.
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8134] hostname: hostname: using hostnamed
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8137] hostname: static hostname changed from (none) to "np0005591288.novalocal"
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8146] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8152] manager[0x560efb41b000]: rfkill: Wi-Fi hardware radio set enabled
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8153] manager[0x560efb41b000]: rfkill: WWAN hardware radio set enabled
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8199] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8200] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8201] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8202] manager: Networking is enabled by state file
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8206] settings: Loaded settings plugin: keyfile (internal)
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8212] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8256] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8272] dhcp: init: Using DHCP client 'internal'
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8276] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8284] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8292] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8304] device (lo): Activation: starting connection 'lo' (b77a1b8c-e360-4dc5-8be9-c999c9100350)
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8314] device (eth0): carrier: link connected
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8323] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8332] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8334] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8344] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8357] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8370] device (eth1): carrier: link connected
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8378] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8387] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (8b2b191b-f4f0-3a8f-bed2-162c0f2abdba) (indicated)
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8388] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8397] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8411] device (eth1): Activation: starting connection 'Wired connection 1' (8b2b191b-f4f0-3a8f-bed2-162c0f2abdba)
Jan 21 22:46:46 np0005591288.novalocal systemd[1]: Started Network Manager.
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8420] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8426] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8429] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8431] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8434] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8448] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8452] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8454] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8457] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8463] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8467] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8478] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8482] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8501] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8502] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8506] device (lo): Activation: successful, device activated.
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8512] dhcp4 (eth0): state changed new lease, address=38.102.83.227
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8517] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8573] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 21 22:46:46 np0005591288.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8618] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8620] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8623] manager: NetworkManager state is now CONNECTED_SITE
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8626] device (eth0): Activation: successful, device activated.
Jan 21 22:46:46 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035606.8631] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 21 22:46:46 np0005591288.novalocal sudo[7180]: pam_unix(sudo:session): session closed for user root
Jan 21 22:46:47 np0005591288.novalocal python3[7266]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-8aa6-eb9d-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 22:46:56 np0005591288.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 22:47:02 np0005591288.novalocal systemd[4308]: Starting Mark boot as successful...
Jan 21 22:47:02 np0005591288.novalocal systemd[4308]: Finished Mark boot as successful.
Jan 21 22:47:16 np0005591288.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.2723] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 21 22:47:32 np0005591288.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 22:47:32 np0005591288.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.3041] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.3044] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.3050] device (eth1): Activation: successful, device activated.
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.3056] manager: startup complete
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.3058] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <warn>  [1769035652.3062] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.3068] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 21 22:47:32 np0005591288.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.3277] dhcp4 (eth1): canceled DHCP transaction
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.3278] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.3278] dhcp4 (eth1): state changed no lease
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.3301] policy: auto-activating connection 'ci-private-network' (33de29ae-c5cf-5966-ab7d-58d01d107e18)
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.3309] device (eth1): Activation: starting connection 'ci-private-network' (33de29ae-c5cf-5966-ab7d-58d01d107e18)
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.3311] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.3316] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.3328] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.3343] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.4141] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.4143] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 22:47:32 np0005591288.novalocal NetworkManager[7194]: <info>  [1769035652.4151] device (eth1): Activation: successful, device activated.
Jan 21 22:47:36 np0005591288.novalocal sshd-session[7295]: error: maximum authentication attempts exceeded for root from 112.119.212.162 port 60824 ssh2 [preauth]
Jan 21 22:47:36 np0005591288.novalocal sshd-session[7295]: Disconnecting authenticating user root 112.119.212.162 port 60824: Too many authentication failures [preauth]
Jan 21 22:47:40 np0005591288.novalocal sshd-session[7297]: error: maximum authentication attempts exceeded for root from 112.119.212.162 port 33080 ssh2 [preauth]
Jan 21 22:47:40 np0005591288.novalocal sshd-session[7297]: Disconnecting authenticating user root 112.119.212.162 port 33080: Too many authentication failures [preauth]
Jan 21 22:47:42 np0005591288.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 22:47:43 np0005591288.novalocal sshd-session[7299]: error: maximum authentication attempts exceeded for root from 112.119.212.162 port 33582 ssh2 [preauth]
Jan 21 22:47:43 np0005591288.novalocal sshd-session[7299]: Disconnecting authenticating user root 112.119.212.162 port 33582: Too many authentication failures [preauth]
Jan 21 22:47:46 np0005591288.novalocal sshd-session[7301]: Received disconnect from 112.119.212.162 port 33958:11: disconnected by user [preauth]
Jan 21 22:47:46 np0005591288.novalocal sshd-session[7301]: Disconnected from authenticating user root 112.119.212.162 port 33958 [preauth]
Jan 21 22:47:47 np0005591288.novalocal sshd-session[4317]: Received disconnect from 38.102.83.114 port 35020:11: disconnected by user
Jan 21 22:47:47 np0005591288.novalocal sshd-session[4317]: Disconnected from user zuul 38.102.83.114 port 35020
Jan 21 22:47:47 np0005591288.novalocal sshd-session[4304]: pam_unix(sshd:session): session closed for user zuul
Jan 21 22:47:47 np0005591288.novalocal systemd-logind[786]: Session 1 logged out. Waiting for processes to exit.
Jan 21 22:47:47 np0005591288.novalocal chronyd[795]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Jan 21 22:47:48 np0005591288.novalocal sshd-session[7303]: Invalid user admin from 112.119.212.162 port 34202
Jan 21 22:47:49 np0005591288.novalocal sshd-session[7303]: error: maximum authentication attempts exceeded for invalid user admin from 112.119.212.162 port 34202 ssh2 [preauth]
Jan 21 22:47:49 np0005591288.novalocal sshd-session[7303]: Disconnecting invalid user admin 112.119.212.162 port 34202: Too many authentication failures [preauth]
Jan 21 22:47:51 np0005591288.novalocal sshd-session[7305]: Invalid user admin from 112.119.212.162 port 34712
Jan 21 22:47:52 np0005591288.novalocal sshd-session[7305]: error: maximum authentication attempts exceeded for invalid user admin from 112.119.212.162 port 34712 ssh2 [preauth]
Jan 21 22:47:52 np0005591288.novalocal sshd-session[7305]: Disconnecting invalid user admin 112.119.212.162 port 34712: Too many authentication failures [preauth]
Jan 21 22:47:54 np0005591288.novalocal sshd-session[7307]: Invalid user admin from 112.119.212.162 port 35098
Jan 21 22:47:55 np0005591288.novalocal sshd-session[7307]: Received disconnect from 112.119.212.162 port 35098:11: disconnected by user [preauth]
Jan 21 22:47:55 np0005591288.novalocal sshd-session[7307]: Disconnected from invalid user admin 112.119.212.162 port 35098 [preauth]
Jan 21 22:47:57 np0005591288.novalocal sshd-session[7310]: Invalid user oracle from 112.119.212.162 port 35412
Jan 21 22:47:58 np0005591288.novalocal sshd-session[7310]: error: maximum authentication attempts exceeded for invalid user oracle from 112.119.212.162 port 35412 ssh2 [preauth]
Jan 21 22:47:58 np0005591288.novalocal sshd-session[7310]: Disconnecting invalid user oracle 112.119.212.162 port 35412: Too many authentication failures [preauth]
Jan 21 22:48:01 np0005591288.novalocal sshd-session[7312]: Invalid user oracle from 112.119.212.162 port 35868
Jan 21 22:48:02 np0005591288.novalocal sshd-session[7312]: error: maximum authentication attempts exceeded for invalid user oracle from 112.119.212.162 port 35868 ssh2 [preauth]
Jan 21 22:48:02 np0005591288.novalocal sshd-session[7312]: Disconnecting invalid user oracle 112.119.212.162 port 35868: Too many authentication failures [preauth]
Jan 21 22:48:04 np0005591288.novalocal sshd-session[7314]: Invalid user oracle from 112.119.212.162 port 36362
Jan 21 22:48:05 np0005591288.novalocal sshd-session[7314]: Received disconnect from 112.119.212.162 port 36362:11: disconnected by user [preauth]
Jan 21 22:48:05 np0005591288.novalocal sshd-session[7314]: Disconnected from invalid user oracle 112.119.212.162 port 36362 [preauth]
Jan 21 22:48:06 np0005591288.novalocal sshd-session[7316]: Invalid user usuario from 112.119.212.162 port 36690
Jan 21 22:48:08 np0005591288.novalocal sshd-session[7316]: error: maximum authentication attempts exceeded for invalid user usuario from 112.119.212.162 port 36690 ssh2 [preauth]
Jan 21 22:48:08 np0005591288.novalocal sshd-session[7316]: Disconnecting invalid user usuario 112.119.212.162 port 36690: Too many authentication failures [preauth]
Jan 21 22:48:10 np0005591288.novalocal sshd-session[7318]: Invalid user usuario from 112.119.212.162 port 37108
Jan 21 22:48:11 np0005591288.novalocal sshd-session[7318]: error: maximum authentication attempts exceeded for invalid user usuario from 112.119.212.162 port 37108 ssh2 [preauth]
Jan 21 22:48:11 np0005591288.novalocal sshd-session[7318]: Disconnecting invalid user usuario 112.119.212.162 port 37108: Too many authentication failures [preauth]
Jan 21 22:48:14 np0005591288.novalocal sshd-session[7320]: Invalid user usuario from 112.119.212.162 port 37578
Jan 21 22:48:15 np0005591288.novalocal sshd-session[7320]: Received disconnect from 112.119.212.162 port 37578:11: disconnected by user [preauth]
Jan 21 22:48:15 np0005591288.novalocal sshd-session[7320]: Disconnected from invalid user usuario 112.119.212.162 port 37578 [preauth]
Jan 21 22:48:17 np0005591288.novalocal sshd-session[7322]: Invalid user test from 112.119.212.162 port 38008
Jan 21 22:48:18 np0005591288.novalocal sshd-session[7322]: error: maximum authentication attempts exceeded for invalid user test from 112.119.212.162 port 38008 ssh2 [preauth]
Jan 21 22:48:18 np0005591288.novalocal sshd-session[7322]: Disconnecting invalid user test 112.119.212.162 port 38008: Too many authentication failures [preauth]
Jan 21 22:48:21 np0005591288.novalocal sshd-session[7324]: Invalid user test from 112.119.212.162 port 38476
Jan 21 22:48:23 np0005591288.novalocal sshd-session[7324]: error: maximum authentication attempts exceeded for invalid user test from 112.119.212.162 port 38476 ssh2 [preauth]
Jan 21 22:48:23 np0005591288.novalocal sshd-session[7324]: Disconnecting invalid user test 112.119.212.162 port 38476: Too many authentication failures [preauth]
Jan 21 22:48:25 np0005591288.novalocal sshd-session[7326]: Invalid user test from 112.119.212.162 port 39068
Jan 21 22:48:25 np0005591288.novalocal sshd-session[7326]: Received disconnect from 112.119.212.162 port 39068:11: disconnected by user [preauth]
Jan 21 22:48:25 np0005591288.novalocal sshd-session[7326]: Disconnected from invalid user test 112.119.212.162 port 39068 [preauth]
Jan 21 22:48:27 np0005591288.novalocal sshd-session[7328]: Invalid user user from 112.119.212.162 port 39412
Jan 21 22:48:28 np0005591288.novalocal sshd-session[7328]: error: maximum authentication attempts exceeded for invalid user user from 112.119.212.162 port 39412 ssh2 [preauth]
Jan 21 22:48:28 np0005591288.novalocal sshd-session[7328]: Disconnecting invalid user user 112.119.212.162 port 39412: Too many authentication failures [preauth]
Jan 21 22:48:30 np0005591288.novalocal sshd-session[7330]: Invalid user user from 112.119.212.162 port 39868
Jan 21 22:48:31 np0005591288.novalocal sshd-session[7330]: error: maximum authentication attempts exceeded for invalid user user from 112.119.212.162 port 39868 ssh2 [preauth]
Jan 21 22:48:31 np0005591288.novalocal sshd-session[7330]: Disconnecting invalid user user 112.119.212.162 port 39868: Too many authentication failures [preauth]
Jan 21 22:48:35 np0005591288.novalocal sshd-session[7332]: Invalid user user from 112.119.212.162 port 40306
Jan 21 22:48:36 np0005591288.novalocal sshd-session[7332]: Received disconnect from 112.119.212.162 port 40306:11: disconnected by user [preauth]
Jan 21 22:48:36 np0005591288.novalocal sshd-session[7332]: Disconnected from invalid user user 112.119.212.162 port 40306 [preauth]
Jan 21 22:48:38 np0005591288.novalocal sshd-session[7334]: Invalid user ftpuser from 112.119.212.162 port 40860
Jan 21 22:48:39 np0005591288.novalocal sshd-session[7334]: error: maximum authentication attempts exceeded for invalid user ftpuser from 112.119.212.162 port 40860 ssh2 [preauth]
Jan 21 22:48:39 np0005591288.novalocal sshd-session[7334]: Disconnecting invalid user ftpuser 112.119.212.162 port 40860: Too many authentication failures [preauth]
Jan 21 22:48:41 np0005591288.novalocal sshd-session[7336]: Invalid user ftpuser from 112.119.212.162 port 41302
Jan 21 22:48:42 np0005591288.novalocal sshd-session[7336]: error: maximum authentication attempts exceeded for invalid user ftpuser from 112.119.212.162 port 41302 ssh2 [preauth]
Jan 21 22:48:42 np0005591288.novalocal sshd-session[7336]: Disconnecting invalid user ftpuser 112.119.212.162 port 41302: Too many authentication failures [preauth]
Jan 21 22:48:45 np0005591288.novalocal sshd-session[7338]: Invalid user ftpuser from 112.119.212.162 port 41756
Jan 21 22:48:46 np0005591288.novalocal sshd-session[7338]: Received disconnect from 112.119.212.162 port 41756:11: disconnected by user [preauth]
Jan 21 22:48:46 np0005591288.novalocal sshd-session[7338]: Disconnected from invalid user ftpuser 112.119.212.162 port 41756 [preauth]
Jan 21 22:48:48 np0005591288.novalocal sshd-session[7340]: Invalid user test1 from 112.119.212.162 port 42230
Jan 21 22:48:49 np0005591288.novalocal sshd-session[7340]: error: maximum authentication attempts exceeded for invalid user test1 from 112.119.212.162 port 42230 ssh2 [preauth]
Jan 21 22:48:49 np0005591288.novalocal sshd-session[7340]: Disconnecting invalid user test1 112.119.212.162 port 42230: Too many authentication failures [preauth]
Jan 21 22:48:51 np0005591288.novalocal sshd-session[7344]: Accepted publickey for zuul from 38.102.83.114 port 50522 ssh2: RSA SHA256:gO0M839svU6fVamuNUCiB4QTUcucusiR8OAS6SArSuQ
Jan 21 22:48:51 np0005591288.novalocal systemd-logind[786]: New session 3 of user zuul.
Jan 21 22:48:51 np0005591288.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 21 22:48:51 np0005591288.novalocal sshd-session[7344]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 22:48:51 np0005591288.novalocal sudo[7423]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euhhcpaazdsdsucwqdbpcguukohctdwd ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 21 22:48:51 np0005591288.novalocal sudo[7423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:48:51 np0005591288.novalocal python3[7425]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 22:48:51 np0005591288.novalocal sshd-session[7342]: Invalid user test1 from 112.119.212.162 port 42664
Jan 21 22:48:51 np0005591288.novalocal sudo[7423]: pam_unix(sudo:session): session closed for user root
Jan 21 22:48:51 np0005591288.novalocal sudo[7496]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubnmoxxrumdjotkinkkclgltoasgebni ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 21 22:48:51 np0005591288.novalocal sudo[7496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:48:52 np0005591288.novalocal python3[7498]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769035731.2634811-373-149719048349651/source _original_basename=tmpcdh_4nc9 follow=False checksum=10f2928cd5900a2ae7328644df3cf339f79373fd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:48:52 np0005591288.novalocal sudo[7496]: pam_unix(sudo:session): session closed for user root
Jan 21 22:48:52 np0005591288.novalocal sshd-session[7342]: error: maximum authentication attempts exceeded for invalid user test1 from 112.119.212.162 port 42664 ssh2 [preauth]
Jan 21 22:48:52 np0005591288.novalocal sshd-session[7342]: Disconnecting invalid user test1 112.119.212.162 port 42664: Too many authentication failures [preauth]
Jan 21 22:48:55 np0005591288.novalocal sshd-session[7523]: Invalid user test1 from 112.119.212.162 port 43104
Jan 21 22:48:55 np0005591288.novalocal sshd-session[7523]: Received disconnect from 112.119.212.162 port 43104:11: disconnected by user [preauth]
Jan 21 22:48:55 np0005591288.novalocal sshd-session[7523]: Disconnected from invalid user test1 112.119.212.162 port 43104 [preauth]
Jan 21 22:48:56 np0005591288.novalocal sshd-session[7347]: Connection closed by 38.102.83.114 port 50522
Jan 21 22:48:56 np0005591288.novalocal sshd-session[7344]: pam_unix(sshd:session): session closed for user zuul
Jan 21 22:48:56 np0005591288.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 21 22:48:56 np0005591288.novalocal systemd-logind[786]: Session 3 logged out. Waiting for processes to exit.
Jan 21 22:48:56 np0005591288.novalocal systemd-logind[786]: Removed session 3.
Jan 21 22:48:58 np0005591288.novalocal sshd-session[7525]: Invalid user test2 from 112.119.212.162 port 43458
Jan 21 22:48:59 np0005591288.novalocal sshd-session[7525]: error: maximum authentication attempts exceeded for invalid user test2 from 112.119.212.162 port 43458 ssh2 [preauth]
Jan 21 22:48:59 np0005591288.novalocal sshd-session[7525]: Disconnecting invalid user test2 112.119.212.162 port 43458: Too many authentication failures [preauth]
Jan 21 22:49:01 np0005591288.novalocal sshd-session[7527]: Invalid user test2 from 112.119.212.162 port 43944
Jan 21 22:49:02 np0005591288.novalocal sshd-session[7527]: error: maximum authentication attempts exceeded for invalid user test2 from 112.119.212.162 port 43944 ssh2 [preauth]
Jan 21 22:49:02 np0005591288.novalocal sshd-session[7527]: Disconnecting invalid user test2 112.119.212.162 port 43944: Too many authentication failures [preauth]
Jan 21 22:49:05 np0005591288.novalocal sshd-session[7529]: Invalid user test2 from 112.119.212.162 port 44404
Jan 21 22:49:05 np0005591288.novalocal sshd-session[7529]: Received disconnect from 112.119.212.162 port 44404:11: disconnected by user [preauth]
Jan 21 22:49:05 np0005591288.novalocal sshd-session[7529]: Disconnected from invalid user test2 112.119.212.162 port 44404 [preauth]
Jan 21 22:49:07 np0005591288.novalocal sshd-session[7531]: Invalid user ubuntu from 112.119.212.162 port 44776
Jan 21 22:49:09 np0005591288.novalocal sshd-session[7531]: error: maximum authentication attempts exceeded for invalid user ubuntu from 112.119.212.162 port 44776 ssh2 [preauth]
Jan 21 22:49:09 np0005591288.novalocal sshd-session[7531]: Disconnecting invalid user ubuntu 112.119.212.162 port 44776: Too many authentication failures [preauth]
Jan 21 22:49:12 np0005591288.novalocal sshd-session[7533]: Invalid user ubuntu from 112.119.212.162 port 45372
Jan 21 22:49:13 np0005591288.novalocal sshd-session[7533]: error: maximum authentication attempts exceeded for invalid user ubuntu from 112.119.212.162 port 45372 ssh2 [preauth]
Jan 21 22:49:13 np0005591288.novalocal sshd-session[7533]: Disconnecting invalid user ubuntu 112.119.212.162 port 45372: Too many authentication failures [preauth]
Jan 21 22:49:15 np0005591288.novalocal sshd-session[7535]: Invalid user ubuntu from 112.119.212.162 port 45890
Jan 21 22:49:16 np0005591288.novalocal sshd-session[7535]: Received disconnect from 112.119.212.162 port 45890:11: disconnected by user [preauth]
Jan 21 22:49:16 np0005591288.novalocal sshd-session[7535]: Disconnected from invalid user ubuntu 112.119.212.162 port 45890 [preauth]
Jan 21 22:49:19 np0005591288.novalocal sshd-session[7537]: Invalid user pi from 112.119.212.162 port 46298
Jan 21 22:49:20 np0005591288.novalocal sshd-session[7537]: Received disconnect from 112.119.212.162 port 46298:11: disconnected by user [preauth]
Jan 21 22:49:20 np0005591288.novalocal sshd-session[7537]: Disconnected from invalid user pi 112.119.212.162 port 46298 [preauth]
Jan 21 22:49:23 np0005591288.novalocal sshd-session[7539]: Invalid user baikal from 112.119.212.162 port 46788
Jan 21 22:49:23 np0005591288.novalocal sshd-session[7539]: Received disconnect from 112.119.212.162 port 46788:11: disconnected by user [preauth]
Jan 21 22:49:23 np0005591288.novalocal sshd-session[7539]: Disconnected from invalid user baikal 112.119.212.162 port 46788 [preauth]
Jan 21 22:50:02 np0005591288.novalocal systemd[4308]: Created slice User Background Tasks Slice.
Jan 21 22:50:02 np0005591288.novalocal systemd[4308]: Starting Cleanup of User's Temporary Files and Directories...
Jan 21 22:50:02 np0005591288.novalocal systemd[4308]: Finished Cleanup of User's Temporary Files and Directories.
Jan 21 22:54:11 np0005591288.novalocal sshd-session[7546]: Accepted publickey for zuul from 38.102.83.114 port 41178 ssh2: RSA SHA256:gO0M839svU6fVamuNUCiB4QTUcucusiR8OAS6SArSuQ
Jan 21 22:54:11 np0005591288.novalocal systemd-logind[786]: New session 4 of user zuul.
Jan 21 22:54:11 np0005591288.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 21 22:54:11 np0005591288.novalocal sshd-session[7546]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 22:54:11 np0005591288.novalocal sudo[7573]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqsmypsoqqzfpqmfmhyfessqcvhuqtae ; /usr/bin/python3'
Jan 21 22:54:11 np0005591288.novalocal sudo[7573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:54:12 np0005591288.novalocal python3[7575]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163e3b-3c83-ae30-1707-000000000ca2-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 22:54:12 np0005591288.novalocal sudo[7573]: pam_unix(sudo:session): session closed for user root
Jan 21 22:54:12 np0005591288.novalocal sudo[7602]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxdfwtluhbebezrghvgafpajpzjuizqp ; /usr/bin/python3'
Jan 21 22:54:12 np0005591288.novalocal sudo[7602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:54:12 np0005591288.novalocal python3[7604]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:54:12 np0005591288.novalocal sudo[7602]: pam_unix(sudo:session): session closed for user root
Jan 21 22:54:12 np0005591288.novalocal sudo[7628]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lszduwayowmfygumbsbzhoeutqxmrikj ; /usr/bin/python3'
Jan 21 22:54:12 np0005591288.novalocal sudo[7628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:54:13 np0005591288.novalocal python3[7630]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:54:13 np0005591288.novalocal sudo[7628]: pam_unix(sudo:session): session closed for user root
Jan 21 22:54:13 np0005591288.novalocal sudo[7654]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xblantojuexrahensrzfznonnnxdfvpt ; /usr/bin/python3'
Jan 21 22:54:13 np0005591288.novalocal sudo[7654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:54:13 np0005591288.novalocal python3[7656]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:54:13 np0005591288.novalocal sudo[7654]: pam_unix(sudo:session): session closed for user root
Jan 21 22:54:13 np0005591288.novalocal sudo[7680]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqmizacavpwzgkdkrnadmlcwwgquzcsa ; /usr/bin/python3'
Jan 21 22:54:13 np0005591288.novalocal sudo[7680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:54:13 np0005591288.novalocal python3[7682]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:54:13 np0005591288.novalocal sudo[7680]: pam_unix(sudo:session): session closed for user root
Jan 21 22:54:13 np0005591288.novalocal sudo[7706]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcmcnihquasznkhxmfxoudgxyzmkxbgy ; /usr/bin/python3'
Jan 21 22:54:13 np0005591288.novalocal sudo[7706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:54:14 np0005591288.novalocal python3[7708]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:54:14 np0005591288.novalocal sudo[7706]: pam_unix(sudo:session): session closed for user root
Jan 21 22:54:14 np0005591288.novalocal sudo[7784]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azhfyudvbdclkvfihvyswlurxjcwhekh ; /usr/bin/python3'
Jan 21 22:54:14 np0005591288.novalocal sudo[7784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:54:14 np0005591288.novalocal python3[7786]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 22:54:14 np0005591288.novalocal sudo[7784]: pam_unix(sudo:session): session closed for user root
Jan 21 22:54:14 np0005591288.novalocal sudo[7857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfzizuovkbyqnbbqyeyyzegxgxrgjpwb ; /usr/bin/python3'
Jan 21 22:54:14 np0005591288.novalocal sudo[7857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:54:14 np0005591288.novalocal python3[7859]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769036054.2921777-363-9822344420747/source _original_basename=tmp0zpk_z9z follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:54:15 np0005591288.novalocal sudo[7857]: pam_unix(sudo:session): session closed for user root
Jan 21 22:54:15 np0005591288.novalocal sudo[7907]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meugtdstvwnnjxtlscwdkzzedvnvjxqd ; /usr/bin/python3'
Jan 21 22:54:15 np0005591288.novalocal sudo[7907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:54:15 np0005591288.novalocal python3[7909]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 22:54:15 np0005591288.novalocal systemd[1]: Reloading.
Jan 21 22:54:15 np0005591288.novalocal systemd-rc-local-generator[7929]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 22:54:16 np0005591288.novalocal sudo[7907]: pam_unix(sudo:session): session closed for user root
Jan 21 22:54:17 np0005591288.novalocal sudo[7963]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmrqcutjencosjmideeecylepiyykcbj ; /usr/bin/python3'
Jan 21 22:54:17 np0005591288.novalocal sudo[7963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:54:17 np0005591288.novalocal python3[7965]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 21 22:54:17 np0005591288.novalocal sudo[7963]: pam_unix(sudo:session): session closed for user root
Jan 21 22:54:19 np0005591288.novalocal sudo[7989]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybjwufgtypbvzlzeeaqzdmfpkilomena ; /usr/bin/python3'
Jan 21 22:54:19 np0005591288.novalocal sudo[7989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:54:19 np0005591288.novalocal python3[7991]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 22:54:19 np0005591288.novalocal sudo[7989]: pam_unix(sudo:session): session closed for user root
Jan 21 22:54:19 np0005591288.novalocal sudo[8017]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxhxescjdwfhbrxytpevqmyrcyfckifd ; /usr/bin/python3'
Jan 21 22:54:19 np0005591288.novalocal sudo[8017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:54:20 np0005591288.novalocal python3[8019]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 22:54:20 np0005591288.novalocal sudo[8017]: pam_unix(sudo:session): session closed for user root
Jan 21 22:54:20 np0005591288.novalocal sudo[8045]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uenexxgtuehqgnxbexlfvmrgjgsrjybd ; /usr/bin/python3'
Jan 21 22:54:20 np0005591288.novalocal sudo[8045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:54:20 np0005591288.novalocal python3[8047]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 22:54:20 np0005591288.novalocal sudo[8045]: pam_unix(sudo:session): session closed for user root
Jan 21 22:54:20 np0005591288.novalocal sudo[8073]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-watuuujlktuehwsdmscljkwygkcqtgvl ; /usr/bin/python3'
Jan 21 22:54:20 np0005591288.novalocal sudo[8073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:54:20 np0005591288.novalocal python3[8075]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 22:54:20 np0005591288.novalocal sudo[8073]: pam_unix(sudo:session): session closed for user root
Jan 21 22:54:21 np0005591288.novalocal python3[8102]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163e3b-3c83-ae30-1707-000000000ca9-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
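The io.max writes above amount to the following procedure, shown here as a minimal shell sketch: resolve the MAJ:MIN pair for /dev/vda (252:0 on this boot), then apply the same read/write IOPS and bandwidth caps to each top-level slice. The loop variable names and the assumption that the cgroup2 io controller is already enabled for these slices are illustrative; the slice names and limit values are taken verbatim from the log lines.

    # Resolve the block device's major:minor number (prints "252:0" here).
    DEV=$(lsblk -nd -o MAJ:MIN /dev/vda)
    LIMITS="riops=18000 wiops=18000 rbps=262144000 wbps=262144000"
    # Throttle each top-level slice; writing io.max needs root and an
    # enabled io controller on the target cgroup.
    for slice in init.scope machine.slice system.slice user.slice; do
        echo "$DEV $LIMITS" > "/sys/fs/cgroup/$slice/io.max"
    done
    # Read the settings back, as the verification command in the log does.
    for slice in init.scope machine.slice system.slice user.slice; do
        cat "/sys/fs/cgroup/$slice/io.max"
    done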
Jan 21 22:54:21 np0005591288.novalocal python3[8132]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 22:54:24 np0005591288.novalocal sshd-session[7549]: Connection closed by 38.102.83.114 port 41178
Jan 21 22:54:24 np0005591288.novalocal sshd-session[7546]: pam_unix(sshd:session): session closed for user zuul
Jan 21 22:54:24 np0005591288.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 21 22:54:24 np0005591288.novalocal systemd[1]: session-4.scope: Consumed 4.372s CPU time.
Jan 21 22:54:24 np0005591288.novalocal systemd-logind[786]: Session 4 logged out. Waiting for processes to exit.
Jan 21 22:54:24 np0005591288.novalocal systemd-logind[786]: Removed session 4.
Jan 21 22:54:26 np0005591288.novalocal sshd-session[8136]: Accepted publickey for zuul from 38.102.83.114 port 41258 ssh2: RSA SHA256:gO0M839svU6fVamuNUCiB4QTUcucusiR8OAS6SArSuQ
Jan 21 22:54:26 np0005591288.novalocal systemd-logind[786]: New session 5 of user zuul.
Jan 21 22:54:26 np0005591288.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 21 22:54:26 np0005591288.novalocal sshd-session[8136]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 22:54:26 np0005591288.novalocal sudo[8163]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wywihkoybnsvmkjpzqraskyognxrbrti ; /usr/bin/python3'
Jan 21 22:54:26 np0005591288.novalocal sudo[8163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:54:26 np0005591288.novalocal python3[8165]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 22:54:32 np0005591288.novalocal setsebool[8207]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 21 22:54:32 np0005591288.novalocal setsebool[8207]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 21 22:54:43 np0005591288.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 21 22:54:43 np0005591288.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 22:54:43 np0005591288.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 21 22:54:43 np0005591288.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 22:54:43 np0005591288.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 21 22:54:43 np0005591288.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 22:54:43 np0005591288.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 22:54:43 np0005591288.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 22:54:53 np0005591288.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 21 22:54:53 np0005591288.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 22:54:53 np0005591288.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 21 22:54:53 np0005591288.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 22:54:53 np0005591288.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 21 22:54:53 np0005591288.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 22:54:53 np0005591288.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 22:54:53 np0005591288.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 22:55:10 np0005591288.novalocal dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 21 22:55:11 np0005591288.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 22:55:11 np0005591288.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 21 22:55:11 np0005591288.novalocal systemd[1]: Reloading.
Jan 21 22:55:11 np0005591288.novalocal systemd-rc-local-generator[8977]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 22:55:11 np0005591288.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 22:55:12 np0005591288.novalocal sudo[8163]: pam_unix(sudo:session): session closed for user root
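The package installation started at 22:54:26 is effectively a single dnf transaction; the setsebool calls, SELinux policy reloads, and man-db cache rebuild that follow are plausibly scriptlet side effects of the packages it pulls in (container-selinux and friends), which the log implies but does not state outright. A minimal equivalent:

    # Install the container tooling the job needs (state=present in the task).
    dnf install -y podman buildah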
Jan 21 22:55:34 np0005591288.novalocal python3[20955]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163e3b-3c83-66b1-910a-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 22:55:35 np0005591288.novalocal kernel: evm: overlay not supported
Jan 21 22:55:35 np0005591288.novalocal systemd[4308]: Starting D-Bus User Message Bus...
Jan 21 22:55:35 np0005591288.novalocal dbus-broker-launch[21465]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 21 22:55:35 np0005591288.novalocal dbus-broker-launch[21465]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 21 22:55:35 np0005591288.novalocal systemd[4308]: Started D-Bus User Message Bus.
Jan 21 22:55:35 np0005591288.novalocal dbus-broker-launch[21465]: Ready
Jan 21 22:55:35 np0005591288.novalocal systemd[4308]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 21 22:55:35 np0005591288.novalocal systemd[4308]: Created slice Slice /user.
Jan 21 22:55:35 np0005591288.novalocal systemd[4308]: podman-21399.scope: unit configures an IP firewall, but not running as root.
Jan 21 22:55:35 np0005591288.novalocal systemd[4308]: (This warning is only shown for the first unit using IP firewalling.)
Jan 21 22:55:35 np0005591288.novalocal systemd[4308]: Started podman-21399.scope.
Jan 21 22:55:35 np0005591288.novalocal systemd[4308]: Started podman-pause-fb5ac3ac.scope.
Jan 21 22:55:36 np0005591288.novalocal sudo[21930]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bergsognflzcixkhdjmysusndaggvdpa ; /usr/bin/python3'
Jan 21 22:55:36 np0005591288.novalocal sudo[21930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:55:36 np0005591288.novalocal python3[21944]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.27:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.27:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:55:36 np0005591288.novalocal python3[21944]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 21 22:55:36 np0005591288.novalocal sudo[21930]: pam_unix(sudo:session): session closed for user root
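The blockinfile task at 22:55:36 appends an insecure-registry entry to /etc/containers/registries.conf so that podman and buildah will pull from the CI mirror over plain HTTP. A sketch of the resulting managed block, reconstructed from the logged module arguments (the marker lines follow the module's "# {mark} ANSIBLE MANAGED BLOCK" default):

    cat >> /etc/containers/registries.conf <<'EOF'
    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.27:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK
    EOF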
Jan 21 22:55:36 np0005591288.novalocal sshd-session[8139]: Connection closed by 38.102.83.114 port 41258
Jan 21 22:55:36 np0005591288.novalocal sshd-session[8136]: pam_unix(sshd:session): session closed for user zuul
Jan 21 22:55:36 np0005591288.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 21 22:55:36 np0005591288.novalocal systemd[1]: session-5.scope: Consumed 42.921s CPU time.
Jan 21 22:55:36 np0005591288.novalocal systemd-logind[786]: Session 5 logged out. Waiting for processes to exit.
Jan 21 22:55:36 np0005591288.novalocal systemd-logind[786]: Removed session 5.
Jan 21 22:55:55 np0005591288.novalocal sshd-session[29166]: Unable to negotiate with 38.102.83.184 port 47328: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 21 22:55:55 np0005591288.novalocal sshd-session[29164]: Connection closed by 38.102.83.184 port 47312 [preauth]
Jan 21 22:55:55 np0005591288.novalocal sshd-session[29161]: Connection closed by 38.102.83.184 port 47318 [preauth]
Jan 21 22:55:55 np0005591288.novalocal sshd-session[29167]: Unable to negotiate with 38.102.83.184 port 47338: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 21 22:55:55 np0005591288.novalocal sshd-session[29168]: Unable to negotiate with 38.102.83.184 port 47348: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 21 22:55:56 np0005591288.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 22:55:56 np0005591288.novalocal systemd[1]: Finished man-db-cache-update.service.
Jan 21 22:55:56 np0005591288.novalocal systemd[1]: man-db-cache-update.service: Consumed 53.946s CPU time.
Jan 21 22:55:56 np0005591288.novalocal systemd[1]: run-rb32e4d61188d4132a86fbb2e9a60ea76.service: Deactivated successfully.
Jan 21 22:56:00 np0005591288.novalocal sshd-session[29662]: Accepted publickey for zuul from 38.102.83.114 port 50094 ssh2: RSA SHA256:gO0M839svU6fVamuNUCiB4QTUcucusiR8OAS6SArSuQ
Jan 21 22:56:00 np0005591288.novalocal systemd-logind[786]: New session 6 of user zuul.
Jan 21 22:56:00 np0005591288.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 21 22:56:00 np0005591288.novalocal sshd-session[29662]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 22:56:00 np0005591288.novalocal python3[29689]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCtz4qYEBIu21YDO+IIJ+JTFfH78nIGfoGczvyTMpp3LJkQx63vQefMf9koI0TwZKXS2oixR0ZibLv1qNlzoPUw= zuul@np0005591287.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:56:00 np0005591288.novalocal sudo[29713]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvmfnkigtgxgfyaafitwdrhjujhfyqhi ; /usr/bin/python3'
Jan 21 22:56:00 np0005591288.novalocal sudo[29713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:56:01 np0005591288.novalocal python3[29715]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCtz4qYEBIu21YDO+IIJ+JTFfH78nIGfoGczvyTMpp3LJkQx63vQefMf9koI0TwZKXS2oixR0ZibLv1qNlzoPUw= zuul@np0005591287.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:56:01 np0005591288.novalocal sudo[29713]: pam_unix(sudo:session): session closed for user root
Jan 21 22:56:02 np0005591288.novalocal sudo[29739]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zplxmxkzwbhwitboshpgiylnfntzozqf ; /usr/bin/python3'
Jan 21 22:56:02 np0005591288.novalocal sudo[29739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:56:02 np0005591288.novalocal python3[29741]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005591288.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 21 22:56:02 np0005591288.novalocal useradd[29743]: new group: name=cloud-admin, GID=1002
Jan 21 22:56:02 np0005591288.novalocal useradd[29743]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 21 22:56:02 np0005591288.novalocal sudo[29739]: pam_unix(sudo:session): session closed for user root
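Judging from the useradd records above (home directory, bash shell, auto-allocated UID/GID 1002), the ansible.builtin.user task reduces to roughly the following; the flags are an inference from the logged task arguments, not a command the log itself ran:

    # Create the CI service account; -m creates /home/cloud-admin.
    useradd -m -s /bin/bash cloud-admin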
Jan 21 22:56:02 np0005591288.novalocal sudo[29773]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfyvbkabukukptkzenfwmvcqlajprtnq ; /usr/bin/python3'
Jan 21 22:56:02 np0005591288.novalocal sudo[29773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:56:02 np0005591288.novalocal python3[29775]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCtz4qYEBIu21YDO+IIJ+JTFfH78nIGfoGczvyTMpp3LJkQx63vQefMf9koI0TwZKXS2oixR0ZibLv1qNlzoPUw= zuul@np0005591287.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 22:56:02 np0005591288.novalocal sudo[29773]: pam_unix(sudo:session): session closed for user root
Jan 21 22:56:03 np0005591288.novalocal sudo[29851]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvhsaxjdefodsuimneorjoumkxhlxopj ; /usr/bin/python3'
Jan 21 22:56:03 np0005591288.novalocal sudo[29851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:56:03 np0005591288.novalocal python3[29853]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 22:56:03 np0005591288.novalocal sudo[29851]: pam_unix(sudo:session): session closed for user root
Jan 21 22:56:03 np0005591288.novalocal sudo[29924]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbchlrlnwouxjwluimwleckggcqgzcpt ; /usr/bin/python3'
Jan 21 22:56:03 np0005591288.novalocal sudo[29924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:56:03 np0005591288.novalocal python3[29926]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769036162.9901607-167-218490682163173/source _original_basename=tmpwmzjz04n follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 22:56:03 np0005591288.novalocal sudo[29924]: pam_unix(sudo:session): session closed for user root
Jan 21 22:56:04 np0005591288.novalocal sudo[29974]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdhgysbenyuxhdolrsblvcswicutglmg ; /usr/bin/python3'
Jan 21 22:56:04 np0005591288.novalocal sudo[29974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 22:56:04 np0005591288.novalocal python3[29976]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 21 22:56:04 np0005591288.novalocal systemd[1]: Starting Hostname Service...
Jan 21 22:56:04 np0005591288.novalocal systemd[1]: Started Hostname Service.
Jan 21 22:56:04 np0005591288.novalocal systemd-hostnamed[29980]: Changed pretty hostname to 'compute-0'
Jan 21 22:56:04 compute-0 systemd-hostnamed[29980]: Hostname set to <compute-0> (static)
Jan 21 22:56:04 compute-0 NetworkManager[7194]: <info>  [1769036164.8824] hostname: static hostname changed from "np0005591288.novalocal" to "compute-0"
Jan 21 22:56:04 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 22:56:04 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 22:56:04 compute-0 sudo[29974]: pam_unix(sudo:session): session closed for user root
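The ansible.builtin.hostname task (use=systemd) goes through systemd-hostnamed, which is why both a pretty and a static hostname change are logged and why the syslog hostname field flips from np0005591288.novalocal to compute-0 mid-stream. It is roughly equivalent to:

    # Set the static (and, by default, pretty) hostname via systemd-hostnamed.
    hostnamectl set-hostname compute-0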
Jan 21 22:56:05 compute-0 sshd-session[29665]: Connection closed by 38.102.83.114 port 50094
Jan 21 22:56:05 compute-0 sshd-session[29662]: pam_unix(sshd:session): session closed for user zuul
Jan 21 22:56:05 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 21 22:56:05 compute-0 systemd[1]: session-6.scope: Consumed 2.421s CPU time.
Jan 21 22:56:05 compute-0 systemd-logind[786]: Session 6 logged out. Waiting for processes to exit.
Jan 21 22:56:05 compute-0 systemd-logind[786]: Removed session 6.
Jan 21 22:56:14 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 22:56:34 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 21 22:59:52 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 21 22:59:52 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 21 22:59:52 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 21 22:59:52 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 21 22:59:57 compute-0 sshd-session[30000]: Accepted publickey for zuul from 38.102.83.184 port 52548 ssh2: RSA SHA256:gO0M839svU6fVamuNUCiB4QTUcucusiR8OAS6SArSuQ
Jan 21 22:59:57 compute-0 systemd-logind[786]: New session 7 of user zuul.
Jan 21 22:59:57 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 21 22:59:57 compute-0 sshd-session[30000]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 22:59:58 compute-0 python3[30076]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:00:00 compute-0 sudo[30190]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhxdjiadzmdytpgpworsiahcxetsadzh ; /usr/bin/python3'
Jan 21 23:00:00 compute-0 sudo[30190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:00:00 compute-0 python3[30192]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 23:00:00 compute-0 sudo[30190]: pam_unix(sudo:session): session closed for user root
Jan 21 23:00:00 compute-0 sudo[30263]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cptyacntescbslnrhnajdbhzhexgqpcj ; /usr/bin/python3'
Jan 21 23:00:00 compute-0 sudo[30263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:00:00 compute-0 python3[30265]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769036400.0031137-33955-265919546401678/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:00:00 compute-0 sudo[30263]: pam_unix(sudo:session): session closed for user root
Jan 21 23:00:00 compute-0 sudo[30289]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmayekwalhgeplogmztvganzsqdjarit ; /usr/bin/python3'
Jan 21 23:00:00 compute-0 sudo[30289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:00:01 compute-0 python3[30291]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 23:00:01 compute-0 sudo[30289]: pam_unix(sudo:session): session closed for user root
Jan 21 23:00:01 compute-0 sudo[30362]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjhbnrhwrwuawkqmttzzwcupjnkgfntv ; /usr/bin/python3'
Jan 21 23:00:01 compute-0 sudo[30362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:00:01 compute-0 python3[30364]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769036400.0031137-33955-265919546401678/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:00:01 compute-0 sudo[30362]: pam_unix(sudo:session): session closed for user root
Jan 21 23:00:01 compute-0 sudo[30388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sviidxkfecaconjcycdlupwxrpuhsdtq ; /usr/bin/python3'
Jan 21 23:00:01 compute-0 sudo[30388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:00:01 compute-0 python3[30390]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 23:00:01 compute-0 sudo[30388]: pam_unix(sudo:session): session closed for user root
Jan 21 23:00:01 compute-0 sudo[30461]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbckzbtcnlagqdzciqtkssrdjeejtrlb ; /usr/bin/python3'
Jan 21 23:00:01 compute-0 sudo[30461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:00:01 compute-0 python3[30463]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769036400.0031137-33955-265919546401678/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:00:02 compute-0 sudo[30461]: pam_unix(sudo:session): session closed for user root
Jan 21 23:00:02 compute-0 sudo[30487]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybpglxtvjqdbdrcvzbzsyteklzwferpj ; /usr/bin/python3'
Jan 21 23:00:02 compute-0 sudo[30487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:00:02 compute-0 python3[30489]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 23:00:02 compute-0 sudo[30487]: pam_unix(sudo:session): session closed for user root
Jan 21 23:00:02 compute-0 sudo[30560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svqqjvdlhyhzdwdzhuuaejweqngwcwfb ; /usr/bin/python3'
Jan 21 23:00:02 compute-0 sudo[30560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:00:02 compute-0 python3[30562]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769036400.0031137-33955-265919546401678/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:00:02 compute-0 sudo[30560]: pam_unix(sudo:session): session closed for user root
Jan 21 23:00:02 compute-0 sudo[30586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjuatokdtmbstmgujvcjmzmsyzoreowc ; /usr/bin/python3'
Jan 21 23:00:02 compute-0 sudo[30586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:00:02 compute-0 python3[30588]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 23:00:02 compute-0 sudo[30586]: pam_unix(sudo:session): session closed for user root
Jan 21 23:00:03 compute-0 sudo[30659]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stbsgjqxirowshtqmonovwjqqfsqtqpi ; /usr/bin/python3'
Jan 21 23:00:03 compute-0 sudo[30659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:00:03 compute-0 python3[30661]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769036400.0031137-33955-265919546401678/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:00:03 compute-0 sudo[30659]: pam_unix(sudo:session): session closed for user root
Jan 21 23:00:03 compute-0 sudo[30685]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fumehaggnyldbfdzxphxvufegjbfllzl ; /usr/bin/python3'
Jan 21 23:00:03 compute-0 sudo[30685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:00:03 compute-0 python3[30687]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 23:00:03 compute-0 sudo[30685]: pam_unix(sudo:session): session closed for user root
Jan 21 23:00:03 compute-0 sudo[30758]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruzrqhwriioyvttdtdtbnfatiddxbxxv ; /usr/bin/python3'
Jan 21 23:00:03 compute-0 sudo[30758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:00:04 compute-0 python3[30760]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769036400.0031137-33955-265919546401678/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:00:04 compute-0 sudo[30758]: pam_unix(sudo:session): session closed for user root
Jan 21 23:00:04 compute-0 sudo[30784]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipezyvoubogfwivelxuwyrklgfftcfpx ; /usr/bin/python3'
Jan 21 23:00:04 compute-0 sudo[30784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:00:04 compute-0 python3[30786]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 23:00:04 compute-0 sudo[30784]: pam_unix(sudo:session): session closed for user root
Jan 21 23:00:04 compute-0 sudo[30857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yknyovfqqxxshmukkcqwvfozrpeupfrq ; /usr/bin/python3'
Jan 21 23:00:04 compute-0 sudo[30857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:00:04 compute-0 python3[30859]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769036400.0031137-33955-265919546401678/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:00:04 compute-0 sudo[30857]: pam_unix(sudo:session): session closed for user root
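Each repo file above lands through Ansible's two-step copy protocol: ansible.legacy.stat checksums the destination, and ansible.legacy.copy ships the staged file only when the SHA-1s differ. Per file the effect is roughly this (pattern sketch using the logged paths; the shell is not Ansible's actual implementation):

    src=/home/zuul/.ansible/tmp/ansible-tmp-1769036400.0031137-33955-265919546401678/source
    dest=/etc/yum.repos.d/delorean.repo
    if [ "$(sha1sum "$dest" 2>/dev/null | cut -d' ' -f1)" != "$(sha1sum "$src" | cut -d' ' -f1)" ]; then
        install -m 0755 "$src" "$dest"    # mode=0755 as requested by the task
    fi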
Jan 21 23:00:07 compute-0 sshd-session[30884]: Connection closed by 192.168.122.11 port 49032 [preauth]
Jan 21 23:00:07 compute-0 sshd-session[30885]: Unable to negotiate with 192.168.122.11 port 49040: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 21 23:00:07 compute-0 sshd-session[30886]: Connection closed by 192.168.122.11 port 49022 [preauth]
Jan 21 23:00:07 compute-0 sshd-session[30889]: Unable to negotiate with 192.168.122.11 port 49066: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 21 23:00:07 compute-0 sshd-session[30887]: Unable to negotiate with 192.168.122.11 port 49056: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
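The preauth failures from 192.168.122.11 mean that client would only accept host key types this server does not have: ssh-ed25519 plus the FIDO sk-* variants. If ed25519 support were intended, generating the missing host key and restarting sshd would satisfy the first offer (sketch):

    ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ''
    systemctl restart sshd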
Jan 21 23:00:16 compute-0 python3[30917]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:01:01 compute-0 CROND[30921]: (root) CMD (run-parts /etc/cron.hourly)
Jan 21 23:01:01 compute-0 run-parts[30924]: (/etc/cron.hourly) starting 0anacron
Jan 21 23:01:01 compute-0 anacron[30932]: Anacron started on 2026-01-21
Jan 21 23:01:01 compute-0 anacron[30932]: Will run job `cron.daily' in 23 min.
Jan 21 23:01:01 compute-0 anacron[30932]: Will run job `cron.weekly' in 43 min.
Jan 21 23:01:01 compute-0 anacron[30932]: Will run job `cron.monthly' in 63 min.
Jan 21 23:01:01 compute-0 anacron[30932]: Jobs will be executed sequentially
Jan 21 23:01:01 compute-0 run-parts[30934]: (/etc/cron.hourly) finished 0anacron
Jan 21 23:01:01 compute-0 CROND[30920]: (root) CMDEND (run-parts /etc/cron.hourly)
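The anacron delays above match the stock EL9 /etc/anacrontab: base delays of 5, 25, and 45 minutes for daily/weekly/monthly plus a single random offset (18 minutes here, within RANDOM_DELAY=45) give 23, 43, and 63. The assumed stock entries (this file's contents are not in the log):

    RANDOM_DELAY=45
    1         5    cron.daily     nice run-parts /etc/cron.daily
    7         25   cron.weekly    nice run-parts /etc/cron.weekly
    @monthly  45   cron.monthly   nice run-parts /etc/cron.monthly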
Jan 21 23:04:11 compute-0 sshd-session[30937]: Connection closed by 203.83.238.251 port 36030
Jan 21 23:05:16 compute-0 sshd-session[30003]: Received disconnect from 38.102.83.184 port 52548:11: disconnected by user
Jan 21 23:05:16 compute-0 sshd-session[30003]: Disconnected from user zuul 38.102.83.184 port 52548
Jan 21 23:05:16 compute-0 sshd-session[30000]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:05:16 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 21 23:05:16 compute-0 systemd[1]: session-7.scope: Consumed 5.204s CPU time.
Jan 21 23:05:16 compute-0 systemd-logind[786]: Session 7 logged out. Waiting for processes to exit.
Jan 21 23:05:16 compute-0 systemd-logind[786]: Removed session 7.
Jan 21 23:13:43 compute-0 sshd-session[30943]: Accepted publickey for zuul from 192.168.122.30 port 42048 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:13:43 compute-0 systemd-logind[786]: New session 8 of user zuul.
Jan 21 23:13:43 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 21 23:13:43 compute-0 sshd-session[30943]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:13:44 compute-0 python3.9[31096]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:13:45 compute-0 sudo[31275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnoswhmeoizfetqavilgitqvllnqsixs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037225.5624993-56-213223998676746/AnsiballZ_command.py'
Jan 21 23:13:46 compute-0 sudo[31275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:13:46 compute-0 python3.9[31277]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:13:53 compute-0 sudo[31275]: pam_unix(sudo:session): session closed for user root
Jan 21 23:13:54 compute-0 sshd-session[30946]: Connection closed by 192.168.122.30 port 42048
Jan 21 23:13:54 compute-0 sshd-session[30943]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:13:54 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 21 23:13:54 compute-0 systemd[1]: session-8.scope: Consumed 8.234s CPU time.
Jan 21 23:13:54 compute-0 systemd-logind[786]: Session 8 logged out. Waiting for processes to exit.
Jan 21 23:13:54 compute-0 systemd-logind[786]: Removed session 8.
Jan 21 23:14:09 compute-0 sshd-session[31334]: Accepted publickey for zuul from 192.168.122.30 port 48734 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:14:09 compute-0 systemd[1]: Starting dnf makecache...
Jan 21 23:14:09 compute-0 systemd-logind[786]: New session 9 of user zuul.
Jan 21 23:14:09 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 21 23:14:09 compute-0 sshd-session[31334]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:14:09 compute-0 dnf[31336]: Failed determining last makecache time.
Jan 21 23:14:09 compute-0 dnf[31336]: delorean-openstack-barbican-42b4c41831408a8e323 191 kB/s |  13 kB     00:00
Jan 21 23:14:09 compute-0 dnf[31336]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 1.5 MB/s |  65 kB     00:00
Jan 21 23:14:10 compute-0 dnf[31336]: delorean-openstack-cinder-1c00d6490d88e436f26ef 750 kB/s |  32 kB     00:00
Jan 21 23:14:10 compute-0 dnf[31336]: delorean-python-stevedore-c4acc5639fd2329372142 2.5 MB/s | 131 kB     00:00
Jan 21 23:14:10 compute-0 dnf[31336]: delorean-python-cloudkitty-tests-tempest-2c80f8 685 kB/s |  32 kB     00:00
Jan 21 23:14:10 compute-0 dnf[31336]: delorean-os-refresh-config-9bfc52b5049be2d8de61 7.1 MB/s | 349 kB     00:00
Jan 21 23:14:10 compute-0 dnf[31336]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 979 kB/s |  42 kB     00:00
Jan 21 23:14:10 compute-0 dnf[31336]: delorean-python-designate-tests-tempest-347fdbc 331 kB/s |  18 kB     00:00
Jan 21 23:14:10 compute-0 dnf[31336]: delorean-openstack-glance-1fd12c29b339f30fe823e 313 kB/s |  18 kB     00:00
Jan 21 23:14:10 compute-0 python3.9[31510]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 21 23:14:10 compute-0 dnf[31336]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 569 kB/s |  29 kB     00:00
Jan 21 23:14:10 compute-0 dnf[31336]: delorean-openstack-manila-3c01b7181572c95dac462 506 kB/s |  25 kB     00:00
Jan 21 23:14:10 compute-0 dnf[31336]: delorean-python-whitebox-neutron-tests-tempest- 3.3 MB/s | 154 kB     00:00
Jan 21 23:14:10 compute-0 dnf[31336]: delorean-openstack-octavia-ba397f07a7331190208c 586 kB/s |  26 kB     00:00
Jan 21 23:14:10 compute-0 dnf[31336]: delorean-openstack-watcher-c014f81a8647287f6dcc 357 kB/s |  16 kB     00:00
Jan 21 23:14:10 compute-0 dnf[31336]: delorean-ansible-config_template-5ccaa22121a7ff 177 kB/s | 7.4 kB     00:00
Jan 21 23:14:10 compute-0 dnf[31336]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 3.0 MB/s | 144 kB     00:00
Jan 21 23:14:10 compute-0 dnf[31336]: delorean-openstack-swift-dc98a8463506ac520c469a 316 kB/s |  14 kB     00:00
Jan 21 23:14:11 compute-0 dnf[31336]: delorean-python-tempestconf-8515371b7cceebd4282 1.2 MB/s |  53 kB     00:00
Jan 21 23:14:11 compute-0 dnf[31336]: delorean-openstack-heat-ui-013accbfd179753bc3f0 2.2 MB/s |  96 kB     00:00
Jan 21 23:14:11 compute-0 dnf[31336]: CentOS Stream 9 - BaseOS                         13 MB/s | 8.9 MB     00:00
Jan 21 23:14:12 compute-0 python3.9[31724]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:14:12 compute-0 sudo[31874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vabyidoirzoqhtsoiyqacitwlquivrnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037252.3445423-93-155187457243673/AnsiballZ_command.py'
Jan 21 23:14:12 compute-0 sudo[31874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:14:12 compute-0 python3.9[31876]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:14:12 compute-0 sudo[31874]: pam_unix(sudo:session): session closed for user root
Jan 21 23:14:13 compute-0 sudo[32032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnhhvxsufwwgmwmipcadfskccpqbsakv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037253.4270148-129-212104781336612/AnsiballZ_stat.py'
Jan 21 23:14:13 compute-0 sudo[32032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:14:14 compute-0 python3.9[32034]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:14:14 compute-0 sudo[32032]: pam_unix(sudo:session): session closed for user root
Jan 21 23:14:14 compute-0 sudo[32184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpgudqwircjfmjnuelmnfpfgklittogs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037254.3252711-153-246382473334964/AnsiballZ_file.py'
Jan 21 23:14:14 compute-0 sudo[32184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:14:15 compute-0 dnf[31336]: CentOS Stream 9 - AppStream                      14 MB/s |  26 MB     00:01
Jan 21 23:14:15 compute-0 python3.9[32186]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:14:15 compute-0 sudo[32184]: pam_unix(sudo:session): session closed for user root
Jan 21 23:14:15 compute-0 sudo[32337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyjjmnhmwpzeomfpwfunfjzdqlymhlst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037255.2470849-177-153147441097737/AnsiballZ_stat.py'
Jan 21 23:14:15 compute-0 sudo[32337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:14:15 compute-0 python3.9[32339]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:14:15 compute-0 sudo[32337]: pam_unix(sudo:session): session closed for user root
Jan 21 23:14:16 compute-0 sudo[32460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksvnrpauqtgvokutezofogesvdsfohvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037255.2470849-177-153147441097737/AnsiballZ_copy.py'
Jan 21 23:14:16 compute-0 sudo[32460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:14:16 compute-0 python3.9[32462]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769037255.2470849-177-153147441097737/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:14:16 compute-0 sudo[32460]: pam_unix(sudo:session): session closed for user root
Jan 21 23:14:17 compute-0 sudo[32612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lybpvfpaypkbhhvifenlqukmhyyhvhis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037256.760348-222-48558239387727/AnsiballZ_setup.py'
Jan 21 23:14:17 compute-0 sudo[32612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:14:17 compute-0 python3.9[32614]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:14:17 compute-0 sudo[32612]: pam_unix(sudo:session): session closed for user root
Jan 21 23:14:18 compute-0 sudo[32768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otfypehihddevoopjbescgcgusgxkrkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037257.8514452-246-130939917735914/AnsiballZ_file.py'
Jan 21 23:14:18 compute-0 sudo[32768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:14:18 compute-0 python3.9[32770]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:14:18 compute-0 sudo[32768]: pam_unix(sudo:session): session closed for user root
Jan 21 23:14:19 compute-0 sudo[32920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfbcywlhacetldfylkdsiejpwahppcbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037258.7157037-273-22971763409042/AnsiballZ_file.py'
Jan 21 23:14:19 compute-0 sudo[32920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:14:19 compute-0 python3.9[32922]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:14:19 compute-0 sudo[32920]: pam_unix(sudo:session): session closed for user root
Jan 21 23:14:20 compute-0 python3.9[33072]: ansible-ansible.builtin.service_facts Invoked
Jan 21 23:14:21 compute-0 dnf[31336]: CentOS Stream 9 - CRB                            11 MB/s | 7.6 MB     00:00
Jan 21 23:14:23 compute-0 dnf[31336]: CentOS Stream 9 - Extras packages               118 kB/s |  20 kB     00:00
Jan 21 23:14:23 compute-0 dnf[31336]: dlrn-antelope-testing                            17 MB/s | 1.1 MB     00:00
Jan 21 23:14:24 compute-0 dnf[31336]: dlrn-antelope-build-deps                        7.9 MB/s | 461 kB     00:00
Jan 21 23:14:24 compute-0 dnf[31336]: centos9-rabbitmq                                1.2 MB/s | 123 kB     00:00
Jan 21 23:14:24 compute-0 dnf[31336]: centos9-storage                                 1.3 MB/s | 415 kB     00:00
Jan 21 23:14:25 compute-0 dnf[31336]: centos9-opstools                                707 kB/s |  51 kB     00:00
Jan 21 23:14:25 compute-0 dnf[31336]: NFV SIG OpenvSwitch                             6.2 MB/s | 461 kB     00:00
Jan 21 23:14:26 compute-0 dnf[31336]: repo-setup-centos-appstream                      78 MB/s |  26 MB     00:00
Jan 21 23:14:27 compute-0 python3.9[33372]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:14:28 compute-0 python3.9[33522]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:14:29 compute-0 python3.9[33676]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:14:30 compute-0 sudo[33832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jghkpcrzliuzgxopzjmntlydciesqmug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037270.2893462-417-97699144432588/AnsiballZ_setup.py'
Jan 21 23:14:30 compute-0 sudo[33832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:14:30 compute-0 python3.9[33834]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:14:31 compute-0 sudo[33832]: pam_unix(sudo:session): session closed for user root
Jan 21 23:14:31 compute-0 sudo[33918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqogxwtddtganwsreuqslcmxlcmapmyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037270.2893462-417-97699144432588/AnsiballZ_dnf.py'
Jan 21 23:14:31 compute-0 sudo[33918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:14:31 compute-0 python3.9[33926]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:14:31 compute-0 dnf[31336]: repo-setup-centos-baseos                         31 MB/s | 8.9 MB     00:00
Jan 21 23:14:33 compute-0 dnf[31336]: repo-setup-centos-highavailability              9.3 MB/s | 744 kB     00:00
Jan 21 23:14:33 compute-0 dnf[31336]: repo-setup-centos-powertools                     50 MB/s | 7.6 MB     00:00
Jan 21 23:14:36 compute-0 dnf[31336]: Extra Packages for Enterprise Linux 9 - x86_64   23 MB/s |  20 MB     00:00
Jan 21 23:14:49 compute-0 dnf[31336]: Metadata cache created.
Jan 21 23:14:50 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 21 23:14:50 compute-0 systemd[1]: Finished dnf makecache.
Jan 21 23:14:50 compute-0 systemd[1]: dnf-makecache.service: Consumed 35.874s CPU time.
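This makecache run is the timer-driven metadata refresh (dnf-makecache.timer activating dnf-makecache.service), which is why it ran unattended alongside the Ansible session and warmed every repository deployed above. To inspect it on such a host (sketch; the ExecStart shown is the assumed EL9 default):

    systemctl list-timers dnf-makecache.timer
    systemctl cat dnf-makecache.service    # ExecStart=/usr/bin/dnf makecache --timer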
Jan 21 23:14:50 compute-0 systemd[1]: Reloading.
Jan 21 23:14:51 compute-0 systemd-rc-local-generator[34005]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:14:51 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 21 23:14:51 compute-0 systemd[1]: Reloading.
Jan 21 23:14:51 compute-0 systemd-rc-local-generator[34045]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:14:51 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 21 23:14:51 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 21 23:14:51 compute-0 systemd[1]: Reloading.
Jan 21 23:14:51 compute-0 systemd-rc-local-generator[34083]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:14:52 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 21 23:14:52 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 21 23:14:52 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 21 23:14:52 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 21 23:15:54 compute-0 kernel: SELinux:  Converting 2724 SID table entries...
Jan 21 23:15:54 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 23:15:54 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 21 23:15:54 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 23:15:54 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 21 23:15:54 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 23:15:54 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 23:15:54 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 23:15:55 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=6 res=1
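The SID-table conversion and policy-capability lines mark an SELinux policy reload, triggered by the openstack-selinux package installed in the dnf transaction above; the avc op=load_policy res=1 line confirms the new policy took effect. To verify the module landed (sketch; the exact module name is assumed):

    sestatus
    semodule -l | grep -i openstack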
Jan 21 23:15:55 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 23:15:55 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 21 23:15:55 compute-0 systemd[1]: Reloading.
Jan 21 23:15:55 compute-0 systemd-rc-local-generator[34405]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:15:55 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 23:15:56 compute-0 sudo[33918]: pam_unix(sudo:session): session closed for user root
Jan 21 23:15:56 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 23:15:56 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 23:15:56 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.316s CPU time.
Jan 21 23:15:56 compute-0 systemd[1]: run-rd28fbd30f051460a830734f85cb0ba2a.service: Deactivated successfully.
Jan 21 23:16:30 compute-0 sudo[35316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryuuddeilbgubzdxfpsahkcopmzhflic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037390.367213-453-115781139831007/AnsiballZ_command.py'
Jan 21 23:16:30 compute-0 sudo[35316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:16:30 compute-0 python3.9[35318]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:16:31 compute-0 sudo[35316]: pam_unix(sudo:session): session closed for user root
Jan 21 23:16:32 compute-0 sudo[35597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkfyozktarahpmipppafejasrmbklsqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037392.0369258-477-53969382943139/AnsiballZ_selinux.py'
Jan 21 23:16:32 compute-0 sudo[35597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:16:32 compute-0 python3.9[35599]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 21 23:16:32 compute-0 sudo[35597]: pam_unix(sudo:session): session closed for user root
Jan 21 23:16:33 compute-0 sudo[35749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqxqtdcrprggbaeyiavacnadialnxbkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037393.3783662-510-162645164293965/AnsiballZ_command.py'
Jan 21 23:16:33 compute-0 sudo[35749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:16:33 compute-0 python3.9[35751]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 21 23:16:34 compute-0 sudo[35749]: pam_unix(sudo:session): session closed for user root
Jan 21 23:16:36 compute-0 sudo[35902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykwisjosijabpofokxrenlxjheewfaxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037395.6908312-534-257048330724196/AnsiballZ_file.py'
Jan 21 23:16:36 compute-0 sudo[35902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:16:36 compute-0 python3.9[35904]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:16:36 compute-0 sudo[35902]: pam_unix(sudo:session): session closed for user root
Jan 21 23:16:37 compute-0 irqbalance[782]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 21 23:16:37 compute-0 irqbalance[782]: IRQ 27 affinity is now unmanaged
Jan 21 23:16:39 compute-0 sudo[36054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alughqbscxopmeiaepzxecwqgvhrchvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037398.5136147-558-145677743487521/AnsiballZ_mount.py'
Jan 21 23:16:39 compute-0 sudo[36054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:16:39 compute-0 python3.9[36056]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 21 23:16:39 compute-0 sudo[36054]: pam_unix(sudo:session): session closed for user root
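Taken together, the last three tasks build a 1 GiB swap file: dd creates /swap (creates=/swap keeps the task idempotent), the file task locks it down to root 0600, and ansible.posix.mount writes the fstab entry. A shell equivalent of what was logged (mkswap/swapon do not appear in this section and are presumably handled later):

    dd if=/dev/zero of=/swap bs=1M count=1024
    chown root:root /swap && chmod 0600 /swap
    grep -q '^/swap ' /etc/fstab || echo '/swap none swap sw 0 0' >> /etc/fstab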
Jan 21 23:16:40 compute-0 sudo[36206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbauacvpbmclfjmpqlmcpvsbmwpahaqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037400.6515644-642-141288430586488/AnsiballZ_file.py'
Jan 21 23:16:40 compute-0 sudo[36206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:16:41 compute-0 python3.9[36208]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:16:41 compute-0 sudo[36206]: pam_unix(sudo:session): session closed for user root
Jan 21 23:16:43 compute-0 sudo[36358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osaaberiiafmxevhdmazmozysjvzuqeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037403.3257015-666-77968208204981/AnsiballZ_stat.py'
Jan 21 23:16:43 compute-0 sudo[36358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:16:49 compute-0 python3.9[36360]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:16:49 compute-0 sudo[36358]: pam_unix(sudo:session): session closed for user root
Jan 21 23:16:50 compute-0 sudo[36483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evjjomntgrskcfexqovzhkppbuokvgel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037403.3257015-666-77968208204981/AnsiballZ_copy.py'
Jan 21 23:16:50 compute-0 sudo[36483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:16:50 compute-0 python3.9[36485]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769037403.3257015-666-77968208204981/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e9e57f31efd3627d7bd35fbbf35e3ce75fb9748b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:16:50 compute-0 sudo[36483]: pam_unix(sudo:session): session closed for user root
Jan 21 23:16:51 compute-0 sudo[36635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-camzcjnzzlepjcfhvjpygatvtxalpwmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037411.0490606-738-94316691268908/AnsiballZ_stat.py'
Jan 21 23:16:51 compute-0 sudo[36635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:16:51 compute-0 python3.9[36637]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:16:51 compute-0 sudo[36635]: pam_unix(sudo:session): session closed for user root
Jan 21 23:16:52 compute-0 sudo[36787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnyczibhmscfqgbilsjoulsfoyamylqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037411.849908-762-84032582345471/AnsiballZ_command.py'
Jan 21 23:16:52 compute-0 sudo[36787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:16:52 compute-0 python3.9[36789]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:16:52 compute-0 sudo[36787]: pam_unix(sudo:session): session closed for user root
Jan 21 23:16:53 compute-0 sudo[36940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nleilespbwbhdcgdqtpeuasmlgpiaotk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037412.7573125-786-261275676250138/AnsiballZ_file.py'
Jan 21 23:16:53 compute-0 sudo[36940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:16:53 compute-0 python3.9[36942]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:16:53 compute-0 sudo[36940]: pam_unix(sudo:session): session closed for user root
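These tasks seed LVM's devices file: vgimportdevices --all records all visible volume groups in /etc/lvm/devices/system.devices, and the follow-up touch ensures the file exists with root 0600 even if no VG was found, switching LVM from global device scanning to its devices-file allowlist. By hand (sketch):

    /usr/sbin/vgimportdevices --all
    touch /etc/lvm/devices/system.devices
    chown root:root /etc/lvm/devices/system.devices && chmod 0600 /etc/lvm/devices/system.devices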
Jan 21 23:16:54 compute-0 sudo[37092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiohmdnrbyzotkcgegfnhghczhfoawzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037413.8486717-819-38886044305007/AnsiballZ_getent.py'
Jan 21 23:16:54 compute-0 sudo[37092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:16:54 compute-0 python3.9[37094]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 21 23:16:54 compute-0 sudo[37092]: pam_unix(sudo:session): session closed for user root
Jan 21 23:16:54 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 23:16:55 compute-0 sudo[37246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqhuzfdzkcqnbtxbihvzfuefembimjzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037414.8613834-843-212192786990137/AnsiballZ_group.py'
Jan 21 23:16:55 compute-0 sudo[37246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:16:55 compute-0 python3.9[37248]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 23:16:55 compute-0 groupadd[37249]: group added to /etc/group: name=qemu, GID=107
Jan 21 23:16:55 compute-0 groupadd[37249]: group added to /etc/gshadow: name=qemu
Jan 21 23:16:55 compute-0 groupadd[37249]: new group: name=qemu, GID=107
Jan 21 23:16:55 compute-0 sudo[37246]: pam_unix(sudo:session): session closed for user root
Jan 21 23:16:56 compute-0 sudo[37404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctedbrgbqexetobpabkrxzhlnvawlsnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037415.901372-867-60408847025480/AnsiballZ_user.py'
Jan 21 23:16:56 compute-0 sudo[37404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:16:56 compute-0 python3.9[37406]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 21 23:16:56 compute-0 useradd[37408]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 21 23:16:56 compute-0 sudo[37404]: pam_unix(sudo:session): session closed for user root
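The getent/group/user sequence pins the qemu account to fixed IDs before any package can create it with different ones: getent probes for an existing entry, then GID and UID 107 are created explicitly. A condensed shell equivalent (sketch; Ansible runs the three steps as separate tasks rather than chained like this):

    getent passwd qemu || {
        groupadd -g 107 qemu
        useradd -u 107 -g qemu -c 'qemu user' -s /sbin/nologin qemu   # home /home/qemu created by default
    }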
Jan 21 23:16:57 compute-0 sudo[37564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwmaqffhwhhalmvwobtnioosaukhlobb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037417.080292-891-239971478978464/AnsiballZ_getent.py'
Jan 21 23:16:57 compute-0 sudo[37564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:16:57 compute-0 python3.9[37566]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 21 23:16:57 compute-0 sudo[37564]: pam_unix(sudo:session): session closed for user root
Jan 21 23:16:58 compute-0 sudo[37717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmlfkujygyooeqdqnzbqwpfinxvvuapb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037417.9015045-915-148568478406569/AnsiballZ_group.py'
Jan 21 23:16:58 compute-0 sudo[37717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:16:58 compute-0 python3.9[37719]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 23:16:58 compute-0 groupadd[37720]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 21 23:16:58 compute-0 groupadd[37720]: group added to /etc/gshadow: name=hugetlbfs
Jan 21 23:16:58 compute-0 groupadd[37720]: new group: name=hugetlbfs, GID=42477
Jan 21 23:16:58 compute-0 sudo[37717]: pam_unix(sudo:session): session closed for user root
Jan 21 23:16:59 compute-0 sudo[37875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvdgdbptiliatwfblunliqxxlvazhkvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037418.916002-942-172728346664816/AnsiballZ_file.py'
Jan 21 23:16:59 compute-0 sudo[37875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:16:59 compute-0 python3.9[37877]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 21 23:16:59 compute-0 sudo[37875]: pam_unix(sudo:session): session closed for user root
Jan 21 23:17:00 compute-0 sudo[38027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyatqwygeejgkgqdhwajctdlibawsgau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037420.0296283-975-18316436343847/AnsiballZ_dnf.py'
Jan 21 23:17:00 compute-0 sudo[38027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:00 compute-0 python3.9[38029]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:17:02 compute-0 sudo[38027]: pam_unix(sudo:session): session closed for user root
Jan 21 23:17:03 compute-0 sudo[38180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeacshukdvwtlaehnvmdrkmoffnukrwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037423.3039289-999-253000122301749/AnsiballZ_file.py'
Jan 21 23:17:03 compute-0 sudo[38180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:03 compute-0 python3.9[38182]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:17:03 compute-0 sudo[38180]: pam_unix(sudo:session): session closed for user root
Jan 21 23:17:04 compute-0 sudo[38332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnxdtnihizltldieijjdwnhmgdifcowm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037424.1106012-1023-71715945681428/AnsiballZ_stat.py'
Jan 21 23:17:04 compute-0 sudo[38332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:04 compute-0 python3.9[38334]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:17:04 compute-0 sudo[38332]: pam_unix(sudo:session): session closed for user root
Jan 21 23:17:05 compute-0 sudo[38455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afoslxnadoasrwqynrrmxyrbkcvgxvar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037424.1106012-1023-71715945681428/AnsiballZ_copy.py'
Jan 21 23:17:05 compute-0 sudo[38455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:05 compute-0 python3.9[38457]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769037424.1106012-1023-71715945681428/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:17:05 compute-0 sudo[38455]: pam_unix(sudo:session): session closed for user root
Jan 21 23:17:06 compute-0 sudo[38607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uypeoqkahzkzkpkajenqgdjadzbledkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037425.7634783-1068-212557781460088/AnsiballZ_systemd.py'
Jan 21 23:17:06 compute-0 sudo[38607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:06 compute-0 python3.9[38609]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 23:17:06 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 21 23:17:06 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 21 23:17:06 compute-0 kernel: Bridge firewalling registered
Jan 21 23:17:06 compute-0 systemd-modules-load[38613]: Inserted module 'br_netfilter'
Jan 21 23:17:06 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 21 23:17:06 compute-0 sudo[38607]: pam_unix(sudo:session): session closed for user root
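Restarting systemd-modules-load after deploying /etc/modules-load.d/99-edpm.conf loads whatever modules that file lists; the bridge-firewalling messages show br_netfilter was among them (the file's contents are not logged, so this is inferred from the 'Inserted module' line). To check by hand (sketch):

    cat /etc/modules-load.d/99-edpm.conf
    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables    # key exists once br_netfilter is loaded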
Jan 21 23:17:07 compute-0 sudo[38766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrteioddlaesipvhyjmdrtfawnwqkdfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037427.2396202-1092-9366563888364/AnsiballZ_stat.py'
Jan 21 23:17:07 compute-0 sudo[38766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:07 compute-0 python3.9[38768]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:17:07 compute-0 sudo[38766]: pam_unix(sudo:session): session closed for user root
Jan 21 23:17:08 compute-0 sudo[38889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhdihivojdtdgtrvinjgqarrxjadtlep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037427.2396202-1092-9366563888364/AnsiballZ_copy.py'
Jan 21 23:17:08 compute-0 sudo[38889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:08 compute-0 python3.9[38891]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769037427.2396202-1092-9366563888364/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:17:08 compute-0 sudo[38889]: pam_unix(sudo:session): session closed for user root
Jan 21 23:17:09 compute-0 sudo[39041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oerfimkzzjzjwqqhzepglvqriqonrmgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037429.0665169-1146-135126046993385/AnsiballZ_dnf.py'
Jan 21 23:17:09 compute-0 sudo[39041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:09 compute-0 python3.9[39043]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:17:13 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 21 23:17:13 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 21 23:17:14 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 23:17:14 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 21 23:17:14 compute-0 systemd[1]: Reloading.
Jan 21 23:17:14 compute-0 systemd-rc-local-generator[39104]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:17:14 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 23:17:15 compute-0 sudo[39041]: pam_unix(sudo:session): session closed for user root
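[note] The dnf task above is equivalent to the single command below; the dbus-broker reloads and man-db-cache-update churn around it are normal side effects of an RPM transaction:
    dnf -y install tuned tuned-profiles-cpu-partitioning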
Jan 21 23:17:16 compute-0 python3.9[40715]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:17:16 compute-0 python3.9[41837]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 21 23:17:17 compute-0 python3.9[42633]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:17:18 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 23:17:18 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 23:17:18 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.841s CPU time.
Jan 21 23:17:18 compute-0 systemd[1]: run-rdd566d1457a54fd4ba7013a603b716ec.service: Deactivated successfully.
Jan 21 23:17:18 compute-0 sudo[43209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lecbpqdsoerkhyxuekbnbgnosnlhoqnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037438.0537298-1263-171938185962271/AnsiballZ_command.py'
Jan 21 23:17:18 compute-0 sudo[43209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:18 compute-0 python3.9[43211]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:17:18 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 21 23:17:19 compute-0 systemd[1]: Starting Authorization Manager...
Jan 21 23:17:19 compute-0 polkitd[43428]: Started polkitd version 0.117
Jan 21 23:17:19 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 21 23:17:19 compute-0 polkitd[43428]: Loading rules from directory /etc/polkit-1/rules.d
Jan 21 23:17:19 compute-0 polkitd[43428]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 21 23:17:19 compute-0 polkitd[43428]: Finished loading, compiling and executing 2 rules
Jan 21 23:17:19 compute-0 polkitd[43428]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 21 23:17:19 compute-0 systemd[1]: Started Authorization Manager.
Jan 21 23:17:19 compute-0 sudo[43209]: pam_unix(sudo:session): session closed for user root
Jan 21 23:17:20 compute-0 sudo[43596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xonrfvchgxithupclwvqdbmepsaxsfkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037439.7396684-1290-65935512354580/AnsiballZ_systemd.py'
Jan 21 23:17:20 compute-0 sudo[43596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:20 compute-0 python3.9[43598]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:17:20 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 21 23:17:20 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 21 23:17:20 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 21 23:17:20 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 21 23:17:20 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 21 23:17:20 compute-0 sudo[43596]: pam_unix(sudo:session): session closed for user root
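[note] The two tasks above switch the tuned profile and then enable and restart the daemon; polkitd is started along the way to authorize tuned-adm's D-Bus call. Roughly the same steps by hand:
    tuned-adm profile throughput-performance
    systemctl enable --now tuned.service
    tuned-adm active    # expect: Current active profile: throughput-performance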
Jan 21 23:17:21 compute-0 python3.9[43760]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 21 23:17:24 compute-0 sudo[43910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdswifkmkjqbkjeysadzvnngtrzamjya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037444.6138923-1461-110584932168380/AnsiballZ_systemd.py'
Jan 21 23:17:24 compute-0 sudo[43910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:25 compute-0 python3.9[43912]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:17:26 compute-0 systemd[1]: Reloading.
Jan 21 23:17:26 compute-0 systemd-rc-local-generator[43942]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:17:26 compute-0 sudo[43910]: pam_unix(sudo:session): session closed for user root
Jan 21 23:17:27 compute-0 sudo[44099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brvbyazpdnjluulllohjgidqikvdqyxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037446.744708-1461-107577191787550/AnsiballZ_systemd.py'
Jan 21 23:17:27 compute-0 sudo[44099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:27 compute-0 python3.9[44101]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:17:27 compute-0 systemd[1]: Reloading.
Jan 21 23:17:27 compute-0 systemd-rc-local-generator[44126]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:17:27 compute-0 sudo[44099]: pam_unix(sudo:session): session closed for user root
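[note] Kernel samepage merging is switched off next; the two systemd tasks above map to:
    systemctl disable --now ksm.service ksmtuned.service
(the "systemd[1]: Reloading." lines come from systemctl rewriting unit symlinks during disable).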
Jan 21 23:17:28 compute-0 sudo[44288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqzdgzygghnmjqiajtxemfdyddefqehc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037447.978103-1509-203530058082491/AnsiballZ_command.py'
Jan 21 23:17:28 compute-0 sudo[44288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:28 compute-0 python3.9[44290]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:17:28 compute-0 sudo[44288]: pam_unix(sudo:session): session closed for user root
Jan 21 23:17:29 compute-0 sudo[44441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iosdqwitlmiklyijawwpgmlreqmplmer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037449.692505-1533-261513102980758/AnsiballZ_command.py'
Jan 21 23:17:29 compute-0 sudo[44441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:30 compute-0 python3.9[44443]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:17:30 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 21 23:17:30 compute-0 sudo[44441]: pam_unix(sudo:session): session closed for user root
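[note] Swap is brought up from a pre-existing /swap file; the kernel line confirms roughly 1 GiB at the default priority of -2. The log starts at mkswap, so the file-creation step below is an assumption:
    fallocate -l 1G /swap && chmod 600 /swap   # assumed; not shown in the log
    mkswap /swap
    swapon /swap
    swapon --show   # verify: /swap ... 1024M ... prio -2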
Jan 21 23:17:30 compute-0 sudo[44594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwbtuoweefckzwijwyjeuxmeipffijef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037450.4089973-1557-128102285263521/AnsiballZ_command.py'
Jan 21 23:17:30 compute-0 sudo[44594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:30 compute-0 python3.9[44596]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:17:32 compute-0 sudo[44594]: pam_unix(sudo:session): session closed for user root
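[note] update-ca-trust regenerates the consolidated system trust stores. The usual pattern when adding a CA (the certificate filename is a placeholder):
    cp extra-ca.pem /etc/pki/ca-trust/source/anchors/
    update-ca-trust extract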
Jan 21 23:17:32 compute-0 sudo[44756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkcaalkluefibaohgbystgqrsoraihib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037452.6247482-1581-192319109733175/AnsiballZ_command.py'
Jan 21 23:17:32 compute-0 sudo[44756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:33 compute-0 python3.9[44758]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:17:33 compute-0 sudo[44756]: pam_unix(sudo:session): session closed for user root
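[note] Writing 2 to /sys/kernel/mm/ksm/run stops KSM and unmerges all currently shared pages (0 stops, 1 runs). Caveat: the task runs with _uses_shell=False, so the '>' appears to be handed to echo as a literal argument rather than performed by a shell; an explicit shell form would be:
    sh -c 'echo 2 > /sys/kernel/mm/ksm/run'   # 0=stop, 1=run, 2=stop and unmerge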
Jan 21 23:17:33 compute-0 sudo[44909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtfthqphjvkdwhtccgozxmcdxfgfugfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037453.3679202-1605-113712942598796/AnsiballZ_systemd.py'
Jan 21 23:17:33 compute-0 sudo[44909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:34 compute-0 python3.9[44911]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 23:17:34 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 21 23:17:34 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 21 23:17:34 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 21 23:17:34 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 21 23:17:34 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 21 23:17:34 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 21 23:17:34 compute-0 sudo[44909]: pam_unix(sudo:session): session closed for user root
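[note] Restarting systemd-sysctl re-applies every fragment under /etc/sysctl.d, picking up the 99-edpm.conf written earlier. Equivalent one-liners:
    systemctl restart systemd-sysctl.service
    sysctl --system   # same effect, applied directly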
Jan 21 23:17:34 compute-0 sshd-session[31338]: Connection closed by 192.168.122.30 port 48734
Jan 21 23:17:34 compute-0 sshd-session[31334]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:17:34 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 21 23:17:34 compute-0 systemd[1]: session-9.scope: Consumed 1min 47.979s CPU time.
Jan 21 23:17:34 compute-0 systemd-logind[786]: Session 9 logged out. Waiting for processes to exit.
Jan 21 23:17:34 compute-0 systemd-logind[786]: Removed session 9.
Jan 21 23:17:39 compute-0 sshd-session[44941]: Accepted publickey for zuul from 192.168.122.30 port 44338 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:17:39 compute-0 systemd-logind[786]: New session 10 of user zuul.
Jan 21 23:17:39 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 21 23:17:39 compute-0 sshd-session[44941]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:17:40 compute-0 python3.9[45094]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:17:41 compute-0 sudo[45248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbilpkoylesxbgupyysxuvfcbeghcdgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037461.1458929-68-244094936331029/AnsiballZ_getent.py'
Jan 21 23:17:41 compute-0 sudo[45248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:41 compute-0 python3.9[45250]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 21 23:17:41 compute-0 sudo[45248]: pam_unix(sudo:session): session closed for user root
Jan 21 23:17:42 compute-0 sudo[45401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqefwbccpoiabiopdxrtsgyhbxqmifjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037462.0216248-92-210203580378333/AnsiballZ_group.py'
Jan 21 23:17:42 compute-0 sudo[45401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:42 compute-0 python3.9[45403]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 23:17:42 compute-0 groupadd[45404]: group added to /etc/group: name=openvswitch, GID=42476
Jan 21 23:17:42 compute-0 groupadd[45404]: group added to /etc/gshadow: name=openvswitch
Jan 21 23:17:42 compute-0 groupadd[45404]: new group: name=openvswitch, GID=42476
Jan 21 23:17:42 compute-0 sudo[45401]: pam_unix(sudo:session): session closed for user root
Jan 21 23:17:43 compute-0 sudo[45559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfvauvqeglvtigaikxciywafckgpukpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037462.9253411-116-50383586975519/AnsiballZ_user.py'
Jan 21 23:17:43 compute-0 sudo[45559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:43 compute-0 python3.9[45561]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 21 23:17:43 compute-0 useradd[45563]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 21 23:17:43 compute-0 useradd[45563]: add 'openvswitch' to group 'hugetlbfs'
Jan 21 23:17:43 compute-0 useradd[45563]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 21 23:17:43 compute-0 sudo[45559]: pam_unix(sudo:session): session closed for user root
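[note] The getent/group/user triple pins the openvswitch account to fixed IDs and adds it to hugetlbfs, matching the groupadd/useradd lines above. By hand:
    groupadd --gid 42476 openvswitch
    useradd --uid 42476 --gid openvswitch --groups hugetlbfs \
            --shell /sbin/nologin --comment 'openvswitch user' openvswitch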
Jan 21 23:17:44 compute-0 sudo[45719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkpnxiatgdsaubkbbiwgmopiquyryovi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037464.2370224-146-127415636119750/AnsiballZ_setup.py'
Jan 21 23:17:44 compute-0 sudo[45719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:44 compute-0 python3.9[45721]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:17:45 compute-0 sudo[45719]: pam_unix(sudo:session): session closed for user root
Jan 21 23:17:45 compute-0 sudo[45803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaizjszaelpdhmobwsqwkzouwtdqmbdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037464.2370224-146-127415636119750/AnsiballZ_dnf.py'
Jan 21 23:17:45 compute-0 sudo[45803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:45 compute-0 python3.9[45805]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 21 23:17:48 compute-0 sudo[45803]: pam_unix(sudo:session): session closed for user root
Jan 21 23:17:48 compute-0 sudo[45968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsdkrevbclhwvajwjhzyfzovoqzbcnsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037468.6829855-188-228368755174421/AnsiballZ_dnf.py'
Jan 21 23:17:48 compute-0 sudo[45968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:17:49 compute-0 python3.9[45970]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:17:59 compute-0 kernel: SELinux:  Converting 2736 SID table entries...
Jan 21 23:18:00 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 23:18:00 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 21 23:18:00 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 23:18:00 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 21 23:18:00 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 23:18:00 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 23:18:00 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 23:18:00 compute-0 groupadd[45993]: group added to /etc/group: name=unbound, GID=994
Jan 21 23:18:00 compute-0 groupadd[45993]: group added to /etc/gshadow: name=unbound
Jan 21 23:18:00 compute-0 groupadd[45993]: new group: name=unbound, GID=994
Jan 21 23:18:00 compute-0 useradd[46000]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 21 23:18:00 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 21 23:18:00 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 21 23:18:01 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 23:18:01 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 21 23:18:01 compute-0 systemd[1]: Reloading.
Jan 21 23:18:01 compute-0 systemd-rc-local-generator[46499]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:18:01 compute-0 systemd-sysv-generator[46504]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:18:02 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 23:18:02 compute-0 sudo[45968]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:02 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 23:18:02 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 23:18:02 compute-0 systemd[1]: run-rdb3f9987159e49c9b1a662563babba54.service: Deactivated successfully.
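[note] openvswitch is first staged with download_only=True and then installed from cache, which keeps the install step itself fast. CLI equivalent:
    dnf -y install --downloadonly openvswitch
    dnf -y install openvswitch
The SELinux SID-table conversion and the unbound user creation above are side effects of dependencies pulled in during the transaction.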
Jan 21 23:18:04 compute-0 sudo[47066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxwdelnrourqswjnkgwcmycqymwmwhev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037484.0181692-212-31982401449634/AnsiballZ_systemd.py'
Jan 21 23:18:04 compute-0 sudo[47066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:05 compute-0 python3.9[47068]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 23:18:05 compute-0 systemd[1]: Reloading.
Jan 21 23:18:05 compute-0 systemd-sysv-generator[47097]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:18:05 compute-0 systemd-rc-local-generator[47092]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:18:05 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 21 23:18:05 compute-0 chown[47110]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 21 23:18:05 compute-0 ovs-ctl[47115]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 21 23:18:05 compute-0 ovs-ctl[47115]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 21 23:18:05 compute-0 ovs-ctl[47115]: Starting ovsdb-server [  OK  ]
Jan 21 23:18:05 compute-0 ovs-vsctl[47164]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 21 23:18:05 compute-0 ovs-vsctl[47184]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"c2a76040-4536-46ac-93c9-20aa76f22ff4\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 21 23:18:05 compute-0 ovs-ctl[47115]: Configuring Open vSwitch system IDs [  OK  ]
Jan 21 23:18:05 compute-0 ovs-vsctl[47190]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 21 23:18:05 compute-0 ovs-ctl[47115]: Enabling remote OVSDB managers [  OK  ]
Jan 21 23:18:05 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 21 23:18:05 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 21 23:18:05 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 21 23:18:05 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 21 23:18:05 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 21 23:18:05 compute-0 ovs-ctl[47234]: Inserting openvswitch module [  OK  ]
Jan 21 23:18:06 compute-0 ovs-ctl[47203]: Starting ovs-vswitchd [  OK  ]
Jan 21 23:18:06 compute-0 ovs-vsctl[47252]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 21 23:18:06 compute-0 ovs-ctl[47203]: Enabling remote OVSDB managers [  OK  ]
Jan 21 23:18:06 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 21 23:18:06 compute-0 systemd[1]: Starting Open vSwitch...
Jan 21 23:18:06 compute-0 systemd[1]: Finished Open vSwitch.
Jan 21 23:18:06 compute-0 sudo[47066]: pam_unix(sudo:session): session closed for user root
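[note] First start of openvswitch.service: ovs-ctl creates /etc/openvswitch/conf.db, launches ovsdb-server and ovs-vswitchd, loads the openvswitch kernel module, and seeds the system IDs. To reproduce and verify by hand:
    systemctl enable --now openvswitch.service
    ovs-vsctl show                                       # both daemons answering
    ovs-vsctl get Open_vSwitch . external-ids:system-id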
Jan 21 23:18:08 compute-0 python3.9[47404]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:18:08 compute-0 sudo[47554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxdxkiehagvmjfixqrghxblnpzdswfuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037488.2974284-266-266162956614636/AnsiballZ_sefcontext.py'
Jan 21 23:18:08 compute-0 sudo[47554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:09 compute-0 python3.9[47556]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 21 23:18:10 compute-0 kernel: SELinux:  Converting 2750 SID table entries...
Jan 21 23:18:10 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 23:18:10 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 21 23:18:10 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 23:18:10 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 21 23:18:10 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 23:18:10 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 23:18:10 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 23:18:10 compute-0 sudo[47554]: pam_unix(sudo:session): session closed for user root
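[note] community.general.sefcontext wraps semanage, and its policy reload explains the second SID-table conversion above. Shell equivalent:
    semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'
    restorecon -Rv /var/lib/edpm-config   # apply the new context to existing files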
Jan 21 23:18:11 compute-0 python3.9[47711]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:18:12 compute-0 sudo[47867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njousigmfkgyrffqsyrqodekvddzlthf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037492.0201454-320-32182514898950/AnsiballZ_dnf.py'
Jan 21 23:18:12 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 21 23:18:12 compute-0 sudo[47867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:12 compute-0 python3.9[47869]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:18:13 compute-0 sudo[47867]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:14 compute-0 sudo[48020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuorvkmoevumfnsniyqfyrvmwmmaqmox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037494.0302734-344-186029225203244/AnsiballZ_command.py'
Jan 21 23:18:14 compute-0 sudo[48020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:14 compute-0 python3.9[48022]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:18:15 compute-0 sudo[48020]: pam_unix(sudo:session): session closed for user root
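[note] rpm -V checks each installed file against the RPM database, so the task above confirms the freshly installed packages are intact. Reading its output (the example line is illustrative, not from this host):
    rpm -V nftables
    # silence means the package verifies clean; a line such as
    #   S.5....T.  c /etc/sysconfig/nftables.conf
    # flags size (S), digest (5) and mtime (T) changes on a config (c) file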
Jan 21 23:18:16 compute-0 sudo[48307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvbscelafknwjrtvrrfwtubxegyudldn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037495.9446628-368-25513775366152/AnsiballZ_file.py'
Jan 21 23:18:16 compute-0 sudo[48307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:16 compute-0 python3.9[48309]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 21 23:18:16 compute-0 sudo[48307]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:17 compute-0 python3.9[48459]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:18:18 compute-0 sudo[48611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrwhhrqjoyizdfkuqxutmykefmepmjiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037497.810215-416-234550209814406/AnsiballZ_dnf.py'
Jan 21 23:18:18 compute-0 sudo[48611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:18 compute-0 python3.9[48613]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:18:20 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 23:18:20 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 21 23:18:20 compute-0 systemd[1]: Reloading.
Jan 21 23:18:20 compute-0 systemd-rc-local-generator[48654]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:18:20 compute-0 systemd-sysv-generator[48657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:18:20 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 23:18:20 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 23:18:20 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 23:18:20 compute-0 systemd[1]: run-r0d55258bb9614a98b2e1cc275d92ca98.service: Deactivated successfully.
Jan 21 23:18:20 compute-0 sudo[48611]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:21 compute-0 sudo[48928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieerbrjhtkeoodvpmpnhiyvepzdjyuyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037501.2123861-440-18984742802874/AnsiballZ_systemd.py'
Jan 21 23:18:21 compute-0 sudo[48928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:21 compute-0 python3.9[48930]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 23:18:21 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 21 23:18:21 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 21 23:18:21 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 21 23:18:21 compute-0 systemd[1]: Stopping Network Manager...
Jan 21 23:18:21 compute-0 NetworkManager[7194]: <info>  [1769037501.9149] caught SIGTERM, shutting down normally.
Jan 21 23:18:21 compute-0 NetworkManager[7194]: <info>  [1769037501.9168] dhcp4 (eth0): canceled DHCP transaction
Jan 21 23:18:21 compute-0 NetworkManager[7194]: <info>  [1769037501.9168] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 23:18:21 compute-0 NetworkManager[7194]: <info>  [1769037501.9169] dhcp4 (eth0): state changed no lease
Jan 21 23:18:21 compute-0 NetworkManager[7194]: <info>  [1769037501.9172] manager: NetworkManager state is now CONNECTED_SITE
Jan 21 23:18:21 compute-0 NetworkManager[7194]: <info>  [1769037501.9254] exiting (success)
Jan 21 23:18:21 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 23:18:21 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 23:18:21 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 21 23:18:21 compute-0 systemd[1]: Stopped Network Manager.
Jan 21 23:18:21 compute-0 systemd[1]: NetworkManager.service: Consumed 12.812s CPU time, 4.1M memory peak, read 0B from disk, written 28.0K to disk.
Jan 21 23:18:21 compute-0 systemd[1]: Starting Network Manager...
Jan 21 23:18:21 compute-0 NetworkManager[48940]: <info>  [1769037501.9908] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:52b6d350-1eb8-4a17-b2d9-800512411866)
Jan 21 23:18:21 compute-0 NetworkManager[48940]: <info>  [1769037501.9910] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 21 23:18:21 compute-0 NetworkManager[48940]: <info>  [1769037501.9978] manager[0x5601f2852000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 21 23:18:22 compute-0 systemd[1]: Starting Hostname Service...
Jan 21 23:18:22 compute-0 systemd[1]: Started Hostname Service.
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.0908] hostname: hostname: using hostnamed
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.0909] hostname: static hostname changed from (none) to "compute-0"
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.0914] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.0919] manager[0x5601f2852000]: rfkill: Wi-Fi hardware radio set enabled
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.0920] manager[0x5601f2852000]: rfkill: WWAN hardware radio set enabled
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.0944] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.0954] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.0954] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.0955] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.0955] manager: Networking is enabled by state file
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.0958] settings: Loaded settings plugin: keyfile (internal)
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.0962] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.0994] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1005] dhcp: init: Using DHCP client 'internal'
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1009] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1016] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1021] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1030] device (lo): Activation: starting connection 'lo' (b77a1b8c-e360-4dc5-8be9-c999c9100350)
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1038] device (eth0): carrier: link connected
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1042] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1049] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1049] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1057] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1064] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1071] device (eth1): carrier: link connected
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1076] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1082] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (33de29ae-c5cf-5966-ab7d-58d01d107e18) (indicated)
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1082] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1087] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1094] device (eth1): Activation: starting connection 'ci-private-network' (33de29ae-c5cf-5966-ab7d-58d01d107e18)
Jan 21 23:18:22 compute-0 systemd[1]: Started Network Manager.
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1102] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1109] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1112] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1114] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1116] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1119] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1122] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1125] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1128] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1136] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1140] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1150] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1166] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1175] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1178] dhcp4 (eth0): state changed new lease, address=38.102.83.227
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1182] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1189] device (lo): Activation: successful, device activated.
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1199] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 21 23:18:22 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1267] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1272] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1279] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1282] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1285] device (eth1): Activation: successful, device activated.
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1293] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1295] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1299] manager: NetworkManager state is now CONNECTED_SITE
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1302] device (eth0): Activation: successful, device activated.
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1309] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 21 23:18:22 compute-0 NetworkManager[48940]: <info>  [1769037502.1337] manager: startup complete
Jan 21 23:18:22 compute-0 sudo[48928]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:22 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 21 23:18:22 compute-0 sudo[49154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuubsglpriwssoemdjxfbjxsnbdoqljf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037502.4255352-464-53339538332638/AnsiballZ_dnf.py'
Jan 21 23:18:22 compute-0 sudo[49154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:22 compute-0 python3.9[49156]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:18:28 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 23:18:28 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 21 23:18:28 compute-0 systemd[1]: Reloading.
Jan 21 23:18:28 compute-0 systemd-rc-local-generator[49211]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:18:28 compute-0 systemd-sysv-generator[49214]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:18:28 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 23:18:29 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 23:18:29 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 23:18:29 compute-0 systemd[1]: run-r1e9f005d49ed449bbdc2e1eccc9cc648.service: Deactivated successfully.
Jan 21 23:18:29 compute-0 sudo[49154]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:31 compute-0 sudo[49614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkoejxzocptqvwycfzkmvtxylbythubv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037511.4563391-500-178590857855600/AnsiballZ_stat.py'
Jan 21 23:18:31 compute-0 sudo[49614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:32 compute-0 python3.9[49616]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:18:32 compute-0 sudo[49614]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:32 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 23:18:32 compute-0 sudo[49766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sttxocpenhqlngzpcgjtqsdbxmdaimsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037512.3058758-527-122173267356422/AnsiballZ_ini_file.py'
Jan 21 23:18:32 compute-0 sudo[49766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:32 compute-0 python3.9[49768]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:18:33 compute-0 sudo[49766]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:34 compute-0 sudo[49920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kukecepeafmueqlpbadalicvmoxwgddz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037514.3336005-557-122461987229687/AnsiballZ_ini_file.py'
Jan 21 23:18:34 compute-0 sudo[49920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:34 compute-0 python3.9[49922]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:18:34 compute-0 sudo[49920]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:35 compute-0 sudo[50072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eodicboccmaxxklmchkxnnfjdvnykxdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037515.0722957-557-262548313861340/AnsiballZ_ini_file.py'
Jan 21 23:18:35 compute-0 sudo[50072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:35 compute-0 python3.9[50074]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:18:35 compute-0 sudo[50072]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:36 compute-0 sudo[50224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhcrigtyxetmgbgvnzfeogkqtklndizi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037515.9278414-602-135456245504657/AnsiballZ_ini_file.py'
Jan 21 23:18:36 compute-0 sudo[50224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:36 compute-0 python3.9[50226]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:18:36 compute-0 sudo[50224]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:36 compute-0 sudo[50376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxbkphxyzrnhcdsavxvuapjnsjftdqax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037516.576281-602-209550119492796/AnsiballZ_ini_file.py'
Jan 21 23:18:36 compute-0 sudo[50376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:37 compute-0 python3.9[50378]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:18:37 compute-0 sudo[50376]: pam_unix(sudo:session): session closed for user root
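[note] The five ini_file tasks leave NetworkManager claiming no interface by default and managing resolv.conf itself. The resulting [main] stanza would look roughly like this (other keys in the file are unknown from the log):
    [main]
    no-auto-default=*
    # dns=none and rc-manager=unmanaged removed here and in conf.d/99-cloud-init.conf
A reload (nmcli general reload), not shown in the log, would be needed for the edits to take effect before the next restart.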
Jan 21 23:18:37 compute-0 sudo[50528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egatpujsoohnacvrkcbyrmacxbyaugwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037517.4743638-647-69547052687156/AnsiballZ_stat.py'
Jan 21 23:18:37 compute-0 sudo[50528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:38 compute-0 python3.9[50530]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:18:38 compute-0 sudo[50528]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:38 compute-0 sudo[50651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bslfjakvzyqlxdssyjcwxiomixiekyty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037517.4743638-647-69547052687156/AnsiballZ_copy.py'
Jan 21 23:18:38 compute-0 sudo[50651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:38 compute-0 python3.9[50653]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769037517.4743638-647-69547052687156/.source _original_basename=.pmm670bz follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:18:38 compute-0 sudo[50651]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:39 compute-0 sudo[50803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjsewplljijhgpsahyndeobgkqlzwyzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037519.0226367-692-113525399257333/AnsiballZ_file.py'
Jan 21 23:18:39 compute-0 sudo[50803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:39 compute-0 python3.9[50805]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:18:39 compute-0 sudo[50803]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:40 compute-0 sudo[50955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdzterwwmxrlmovbqtlhqgidxhzahviw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037519.819441-716-104804074315363/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 21 23:18:40 compute-0 sudo[50955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:40 compute-0 python3.9[50957]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 21 23:18:40 compute-0 sudo[50955]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:41 compute-0 sudo[51107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olyfzawecizgmymvzhzuevtlbfwvetjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037520.779107-743-268448634049312/AnsiballZ_file.py'
Jan 21 23:18:41 compute-0 sudo[51107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:41 compute-0 python3.9[51109]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:18:41 compute-0 sudo[51107]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:42 compute-0 sudo[51259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxwvvzmuhkbfosyxajreixwowzdtaafy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037521.737739-773-160997961203738/AnsiballZ_stat.py'
Jan 21 23:18:42 compute-0 sudo[51259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:42 compute-0 sudo[51259]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:42 compute-0 sudo[51382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kowalspxxudzipuqtyeclnarsejudiax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037521.737739-773-160997961203738/AnsiballZ_copy.py'
Jan 21 23:18:42 compute-0 sudo[51382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:42 compute-0 sudo[51382]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:43 compute-0 sudo[51534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkximphwfxmxwyoopmmekdpehnklrbfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037523.2238014-818-220477836973464/AnsiballZ_slurp.py'
Jan 21 23:18:43 compute-0 sudo[51534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:43 compute-0 python3.9[51536]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 21 23:18:43 compute-0 sudo[51534]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:45 compute-0 sudo[51709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaunvqducwcoyqkkhilpwpmzahaaysgk ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037524.33547-845-137010782476292/async_wrapper.py j266540613438 300 /home/zuul/.ansible/tmp/ansible-tmp-1769037524.33547-845-137010782476292/AnsiballZ_edpm_os_net_config.py _'
Jan 21 23:18:45 compute-0 sudo[51709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:45 compute-0 ansible-async_wrapper.py[51711]: Invoked with j266540613438 300 /home/zuul/.ansible/tmp/ansible-tmp-1769037524.33547-845-137010782476292/AnsiballZ_edpm_os_net_config.py _
Jan 21 23:18:45 compute-0 ansible-async_wrapper.py[51714]: Starting module and watcher
Jan 21 23:18:45 compute-0 ansible-async_wrapper.py[51714]: Start watching 51715 (300)
Jan 21 23:18:45 compute-0 ansible-async_wrapper.py[51715]: Start module (51715)
Jan 21 23:18:45 compute-0 ansible-async_wrapper.py[51711]: Return async_wrapper task started.
Jan 21 23:18:45 compute-0 sudo[51709]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:46 compute-0 python3.9[51716]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 21 23:18:46 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 21 23:18:46 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 21 23:18:46 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 21 23:18:46 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 21 23:18:46 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.8777] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51717 uid=0 result="success"
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.8794] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51717 uid=0 result="success"
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.9234] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.9237] audit: op="connection-add" uuid="0dde9607-d12e-4b9b-a7ee-c08a8b394136" name="br-ex-br" pid=51717 uid=0 result="success"
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.9251] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.9253] audit: op="connection-add" uuid="5a5318bb-44ee-4920-857a-43c30018cad1" name="br-ex-port" pid=51717 uid=0 result="success"
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.9266] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.9268] audit: op="connection-add" uuid="a64a5d28-7e54-4af2-9d35-0886f1c66deb" name="eth1-port" pid=51717 uid=0 result="success"
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.9280] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.9283] audit: op="connection-add" uuid="94dc9e08-71e1-4fd5-870d-93a0a0f90482" name="vlan20-port" pid=51717 uid=0 result="success"
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.9295] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.9297] audit: op="connection-add" uuid="03aa8cc2-4716-4773-a5fe-3aca1c60e1c8" name="vlan21-port" pid=51717 uid=0 result="success"
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.9309] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.9311] audit: op="connection-add" uuid="d8617599-e362-429b-adfc-e6f6a7da3ac5" name="vlan22-port" pid=51717 uid=0 result="success"
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.9323] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.9325] audit: op="connection-add" uuid="85889c4a-e856-477c-b0d0-f8fb7724957d" name="vlan23-port" pid=51717 uid=0 result="success"
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.9346] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout,connection.autoconnect-priority,connection.timestamp" pid=51717 uid=0 result="success"
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.9364] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 21 23:18:47 compute-0 NetworkManager[48940]: <info>  [1769037527.9367] audit: op="connection-add" uuid="2d15a7c8-64f3-47c8-a253-ee446401e435" name="br-ex-if" pid=51717 uid=0 result="success"
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0119] audit: op="connection-update" uuid="33de29ae-c5cf-5966-ab7d-58d01d107e18" name="ci-private-network" args="ovs-external-ids.data,ipv6.addresses,ipv6.routing-rules,ipv6.method,ipv6.addr-gen-mode,ipv6.dns,ipv6.routes,ipv4.addresses,ipv4.routing-rules,ipv4.method,ipv4.never-default,ipv4.dns,ipv4.routes,ovs-interface.type,connection.controller,connection.port-type,connection.slave-type,connection.timestamp,connection.master" pid=51717 uid=0 result="success"
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0147] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0150] audit: op="connection-add" uuid="af316029-ac9f-4abf-b3f0-b6214477586a" name="vlan20-if" pid=51717 uid=0 result="success"
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0176] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0179] audit: op="connection-add" uuid="4d0a98ec-5175-44bb-9964-01a98622b928" name="vlan21-if" pid=51717 uid=0 result="success"
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0204] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0208] audit: op="connection-add" uuid="0a7713f6-dc25-462a-81b9-9923b041f41a" name="vlan22-if" pid=51717 uid=0 result="success"
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0236] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0239] audit: op="connection-add" uuid="29b3c179-1d62-43e4-8789-133d90bee41a" name="vlan23-if" pid=51717 uid=0 result="success"
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0255] audit: op="connection-delete" uuid="8b2b191b-f4f0-3a8f-bed2-162c0f2abdba" name="Wired connection 1" pid=51717 uid=0 result="success"
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0277] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <warn>  [1769037528.0281] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0295] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0302] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (0dde9607-d12e-4b9b-a7ee-c08a8b394136)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0304] audit: op="connection-activate" uuid="0dde9607-d12e-4b9b-a7ee-c08a8b394136" name="br-ex-br" pid=51717 uid=0 result="success"
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0308] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <warn>  [1769037528.0311] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0321] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0328] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (5a5318bb-44ee-4920-857a-43c30018cad1)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0332] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <warn>  [1769037528.0335] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0343] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0351] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (a64a5d28-7e54-4af2-9d35-0886f1c66deb)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0354] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <warn>  [1769037528.0358] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0366] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0373] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (94dc9e08-71e1-4fd5-870d-93a0a0f90482)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0378] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <warn>  [1769037528.0380] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0389] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0396] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (03aa8cc2-4716-4773-a5fe-3aca1c60e1c8)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0401] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <warn>  [1769037528.0403] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0412] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0420] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (d8617599-e362-429b-adfc-e6f6a7da3ac5)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0424] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <warn>  [1769037528.0427] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0436] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0444] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (85889c4a-e856-477c-b0d0-f8fb7724957d)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0447] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0452] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0455] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0469] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <warn>  [1769037528.0471] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0477] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0485] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (2d15a7c8-64f3-47c8-a253-ee446401e435)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0487] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0494] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0498] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0500] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0502] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0522] device (eth1): disconnecting for new activation request.
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0523] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0529] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0533] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0536] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0542] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <warn>  [1769037528.0544] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0552] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0562] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (af316029-ac9f-4abf-b3f0-b6214477586a)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0563] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0570] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0573] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0576] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0582] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <warn>  [1769037528.0584] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0591] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0599] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (4d0a98ec-5175-44bb-9964-01a98622b928)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0600] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0606] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0609] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0611] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0617] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <warn>  [1769037528.0619] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0624] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0632] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (0a7713f6-dc25-462a-81b9-9923b041f41a)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0634] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0639] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0642] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0646] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0651] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <warn>  [1769037528.0652] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0659] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0667] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (29b3c179-1d62-43e4-8789-133d90bee41a)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0668] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0673] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0676] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0679] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0681] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0705] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,connection.autoconnect-priority" pid=51717 uid=0 result="success"
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0709] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0715] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0718] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0730] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0736] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0744] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0763] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0768] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0778] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0786] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0792] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 kernel: Timeout policy base is empty
Jan 21 23:18:48 compute-0 systemd-udevd[51721]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0797] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0805] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0812] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0835] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0841] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0850] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0859] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0866] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0871] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0881] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0890] dhcp4 (eth0): canceled DHCP transaction
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0891] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0892] dhcp4 (eth0): state changed no lease
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0896] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0942] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0952] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51717 uid=0 result="fail" reason="Device is not activated"
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0966] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.0973] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1045] device (eth1): Activation: starting connection 'ci-private-network' (33de29ae-c5cf-5966-ab7d-58d01d107e18)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1052] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1054] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1056] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1058] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1060] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1062] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1064] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1068] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1074] dhcp4 (eth0): state changed new lease, address=38.102.83.227
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1082] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1090] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1095] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1103] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1109] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1117] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1122] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1128] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1134] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1161] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1167] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1172] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1177] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1182] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1188] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1193] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.1198] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 23:18:48 compute-0 kernel: br-ex: entered promiscuous mode
Jan 21 23:18:48 compute-0 kernel: vlan22: entered promiscuous mode
Jan 21 23:18:48 compute-0 kernel: vlan20: entered promiscuous mode
Jan 21 23:18:48 compute-0 systemd-udevd[51723]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 23:18:48 compute-0 kernel: vlan21: entered promiscuous mode
Jan 21 23:18:48 compute-0 kernel: vlan23: entered promiscuous mode
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2008] device (eth1): state change: config -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2011] device (eth1): released from controller device eth1
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2023] device (eth1): disconnecting for new activation request.
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2024] audit: op="connection-activate" uuid="33de29ae-c5cf-5966-ab7d-58d01d107e18" name="ci-private-network" pid=51717 uid=0 result="success"
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2033] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2064] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2077] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2088] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2101] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2113] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2116] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2128] device (eth1): Activation: starting connection 'ci-private-network' (33de29ae-c5cf-5966-ab7d-58d01d107e18)
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2162] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2169] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2178] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51717 uid=0 result="success"
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2204] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2221] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2256] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2261] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2271] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2280] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2290] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2307] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2313] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2319] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2323] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2329] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2330] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2332] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2333] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2334] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2339] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2344] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2348] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2353] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2357] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2363] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2368] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2374] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2376] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 23:18:48 compute-0 NetworkManager[48940]: <info>  [1769037528.2380] device (eth1): Activation: successful, device activated.
Jan 21 23:18:49 compute-0 sudo[52077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jizxazduglcsglhwvvdqojwmdyaephgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037528.601537-845-95922960763285/AnsiballZ_async_status.py'
Jan 21 23:18:49 compute-0 sudo[52077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:49 compute-0 python3.9[52079]: ansible-ansible.legacy.async_status Invoked with jid=j266540613438.51711 mode=status _async_dir=/root/.ansible_async
Jan 21 23:18:49 compute-0 sudo[52077]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:49 compute-0 NetworkManager[48940]: <info>  [1769037529.3706] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51717 uid=0 result="success"
Jan 21 23:18:49 compute-0 NetworkManager[48940]: <info>  [1769037529.5980] checkpoint[0x5601f2827950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 21 23:18:49 compute-0 NetworkManager[48940]: <info>  [1769037529.5984] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51717 uid=0 result="success"
Jan 21 23:18:49 compute-0 NetworkManager[48940]: <info>  [1769037529.9265] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51717 uid=0 result="success"
Jan 21 23:18:49 compute-0 NetworkManager[48940]: <info>  [1769037529.9277] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51717 uid=0 result="success"
Jan 21 23:18:50 compute-0 ansible-async_wrapper.py[51714]: 51715 still running (300)
Jan 21 23:18:50 compute-0 NetworkManager[48940]: <info>  [1769037530.5083] audit: op="networking-control" arg="global-dns-configuration" pid=51717 uid=0 result="success"
Jan 21 23:18:50 compute-0 NetworkManager[48940]: <info>  [1769037530.5974] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 21 23:18:50 compute-0 NetworkManager[48940]: <info>  [1769037530.8581] audit: op="networking-control" arg="global-dns-configuration" pid=51717 uid=0 result="success"
Jan 21 23:18:50 compute-0 NetworkManager[48940]: <info>  [1769037530.8608] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51717 uid=0 result="success"
Jan 21 23:18:51 compute-0 NetworkManager[48940]: <info>  [1769037531.1033] checkpoint[0x5601f2827a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 21 23:18:51 compute-0 NetworkManager[48940]: <info>  [1769037531.1043] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51717 uid=0 result="success"
Jan 21 23:18:51 compute-0 ansible-async_wrapper.py[51715]: Module complete (51715)
Jan 21 23:18:52 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 21 23:18:52 compute-0 sudo[52185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfxmxjizyhpmoowchueeansccaeczqjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037528.601537-845-95922960763285/AnsiballZ_async_status.py'
Jan 21 23:18:52 compute-0 sudo[52185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:52 compute-0 python3.9[52187]: ansible-ansible.legacy.async_status Invoked with jid=j266540613438.51711 mode=status _async_dir=/root/.ansible_async
Jan 21 23:18:52 compute-0 sudo[52185]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:53 compute-0 sudo[52284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weehqqxteqmsqqxkwbfizzkfpoxeayyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037528.601537-845-95922960763285/AnsiballZ_async_status.py'
Jan 21 23:18:53 compute-0 sudo[52284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:53 compute-0 python3.9[52286]: ansible-ansible.legacy.async_status Invoked with jid=j266540613438.51711 mode=cleanup _async_dir=/root/.ansible_async
Jan 21 23:18:53 compute-0 sudo[52284]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:54 compute-0 sudo[52437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfruqnsfbpbztuxgucxyvmsqmenmbojj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037533.6725464-926-79981725609783/AnsiballZ_stat.py'
Jan 21 23:18:54 compute-0 sudo[52437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:54 compute-0 python3.9[52439]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:18:54 compute-0 sudo[52437]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:54 compute-0 sudo[52560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynorokzhehzwvzpzumsobvywfvxajfzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037533.6725464-926-79981725609783/AnsiballZ_copy.py'
Jan 21 23:18:54 compute-0 sudo[52560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:54 compute-0 python3.9[52562]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769037533.6725464-926-79981725609783/.source.returncode _original_basename=.3l1r1q2d follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:18:54 compute-0 sudo[52560]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:55 compute-0 ansible-async_wrapper.py[51714]: Done in kid B.
Jan 21 23:18:55 compute-0 sudo[52712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmucqjphpoqizwnpdowvisabgrdqytmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037535.2553816-974-19266583634701/AnsiballZ_stat.py'
Jan 21 23:18:55 compute-0 sudo[52712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:55 compute-0 python3.9[52714]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:18:55 compute-0 sudo[52712]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:56 compute-0 sudo[52835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hphuhetlwqfpmahlgbagtkugtqdrpoiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037535.2553816-974-19266583634701/AnsiballZ_copy.py'
Jan 21 23:18:56 compute-0 sudo[52835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:56 compute-0 python3.9[52837]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769037535.2553816-974-19266583634701/.source.cfg _original_basename=.v8ghwvwq follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:18:56 compute-0 sudo[52835]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:56 compute-0 sudo[52988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzoermhbrclxxzfdhvwttzrmluzkglnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037536.6037507-1019-128119222595053/AnsiballZ_systemd.py'
Jan 21 23:18:56 compute-0 sudo[52988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:18:57 compute-0 python3.9[52990]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 23:18:57 compute-0 systemd[1]: Reloading Network Manager...
Jan 21 23:18:57 compute-0 NetworkManager[48940]: <info>  [1769037537.3465] audit: op="reload" arg="0" pid=52994 uid=0 result="success"
Jan 21 23:18:57 compute-0 NetworkManager[48940]: <info>  [1769037537.3476] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 21 23:18:57 compute-0 systemd[1]: Reloaded Network Manager.
Jan 21 23:18:57 compute-0 sudo[52988]: pam_unix(sudo:session): session closed for user root
Jan 21 23:18:57 compute-0 sshd-session[44944]: Connection closed by 192.168.122.30 port 44338
Jan 21 23:18:57 compute-0 sshd-session[44941]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:18:57 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 21 23:18:57 compute-0 systemd[1]: session-10.scope: Consumed 51.971s CPU time.
Jan 21 23:18:57 compute-0 systemd-logind[786]: Session 10 logged out. Waiting for processes to exit.
Jan 21 23:18:57 compute-0 systemd-logind[786]: Removed session 10.
Jan 21 23:19:03 compute-0 sshd-session[53025]: Accepted publickey for zuul from 192.168.122.30 port 50230 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:19:03 compute-0 systemd-logind[786]: New session 11 of user zuul.
Jan 21 23:19:03 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 21 23:19:03 compute-0 sshd-session[53025]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:19:04 compute-0 python3.9[53178]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:19:05 compute-0 python3.9[53332]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:19:07 compute-0 python3.9[53526]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:19:07 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 23:19:07 compute-0 sshd-session[53028]: Connection closed by 192.168.122.30 port 50230
Jan 21 23:19:07 compute-0 sshd-session[53025]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:19:07 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 21 23:19:07 compute-0 systemd[1]: session-11.scope: Consumed 2.654s CPU time.
Jan 21 23:19:07 compute-0 systemd-logind[786]: Session 11 logged out. Waiting for processes to exit.
Jan 21 23:19:07 compute-0 systemd-logind[786]: Removed session 11.
Jan 21 23:19:13 compute-0 sshd-session[53556]: Accepted publickey for zuul from 192.168.122.30 port 53912 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:19:13 compute-0 systemd-logind[786]: New session 12 of user zuul.
Jan 21 23:19:13 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 21 23:19:13 compute-0 sshd-session[53556]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:19:14 compute-0 python3.9[53709]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:19:15 compute-0 python3.9[53863]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:19:16 compute-0 sudo[54018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stjlgnyotwqplqpbdsyifcudguqqazlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037556.0439339-80-484857707582/AnsiballZ_setup.py'
Jan 21 23:19:16 compute-0 sudo[54018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:16 compute-0 python3.9[54020]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:19:17 compute-0 sudo[54018]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:17 compute-0 sudo[54102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbrpwqtupwsbdhigdmfllrwiwsclehkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037556.0439339-80-484857707582/AnsiballZ_dnf.py'
Jan 21 23:19:17 compute-0 sudo[54102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:17 compute-0 python3.9[54104]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:19:18 compute-0 sudo[54102]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:19 compute-0 sudo[54256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgmapqbxyglmnvxdgnjopssfvwjcsywv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037559.4915888-116-77186668675313/AnsiballZ_setup.py'
Jan 21 23:19:19 compute-0 sudo[54256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:20 compute-0 python3.9[54258]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:19:20 compute-0 sudo[54256]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:21 compute-0 sudo[54451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpgyrmlpzhwidaxapnwssjnewmppmfjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037560.8429315-149-69792541236454/AnsiballZ_file.py'
Jan 21 23:19:21 compute-0 sudo[54451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:21 compute-0 python3.9[54453]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:19:21 compute-0 sudo[54451]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:22 compute-0 sudo[54603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qydwpllnygmaqtpggkhzcwqovpcgqdul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037561.7524514-173-55851811229288/AnsiballZ_command.py'
Jan 21 23:19:22 compute-0 sudo[54603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:22 compute-0 python3.9[54605]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:19:22 compute-0 podman[54606]: 2026-01-21 23:19:22.520181903 +0000 UTC m=+0.068726225 system refresh
Jan 21 23:19:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 23:19:22 compute-0 sudo[54603]: pam_unix(sudo:session): session closed for user root
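The command invocation above was run with _uses_shell=True, i.e. through the shell module; as a task it would look roughly like this sketch (task name assumed, command string as logged):

    - name: Inspect the default podman network   # name assumed
      ansible.builtin.shell: podman network inspect podman

The podman "system refresh" event and the overlay unmount that follow appear to be side effects of podman's first invocation after installation.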
Jan 21 23:19:23 compute-0 sudo[54764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsskmxjidzmwetambeqgzgrbydkzmogy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037562.8135724-197-194305705616404/AnsiballZ_stat.py'
Jan 21 23:19:23 compute-0 sudo[54764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:23 compute-0 python3.9[54766]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:19:23 compute-0 sudo[54764]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:24 compute-0 sudo[54887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghtlqgilsxfhiaoqslwmxonrwpndmcfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037562.8135724-197-194305705616404/AnsiballZ_copy.py'
Jan 21 23:19:24 compute-0 sudo[54887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:24 compute-0 python3.9[54889]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769037562.8135724-197-194305705616404/.source.json follow=False _original_basename=podman_network_config.j2 checksum=39a82349adf0c4d8a7fdd691ed110ddf836f1934 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:19:24 compute-0 sudo[54887]: pam_unix(sudo:session): session closed for user root
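The stat/copy pair above, with _original_basename=podman_network_config.j2, is how an ansible.builtin.template task executes remotely: the template is rendered on the controller, checksummed against the target via stat, then shipped with copy. A sketch of the originating task (name and src path assumed from the basename; dest, owner, group and mode as logged):

    - name: Deploy the network definition for the default podman network   # name assumed
      ansible.builtin.template:
        src: podman_network_config.j2
        dest: /etc/containers/networks/podman.json
        owner: root
        group: root
        mode: "0644"

The registries.conf.d deployment that follows at 23:19:25 (registries.conf.j2) uses the same stat/copy pattern, with setype=etc_t added.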
Jan 21 23:19:24 compute-0 sudo[55039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhnuauixcqombstiyvrsarxeyctjnhcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037564.5165157-242-72085866165821/AnsiballZ_stat.py'
Jan 21 23:19:24 compute-0 sudo[55039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:25 compute-0 python3.9[55041]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:19:25 compute-0 sudo[55039]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:25 compute-0 sudo[55162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pismmrnphpuolrvivmqidmhfatrrjlud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037564.5165157-242-72085866165821/AnsiballZ_copy.py'
Jan 21 23:19:25 compute-0 sudo[55162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:25 compute-0 python3.9[55164]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769037564.5165157-242-72085866165821/.source.conf follow=False _original_basename=registries.conf.j2 checksum=51f7dfe021bf6a784cb4010cf142a3df219fb1a0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:19:25 compute-0 sudo[55162]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:26 compute-0 sudo[55314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljworozybgewnqcoloephxumjghbpoiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037566.0880728-290-121115455381403/AnsiballZ_ini_file.py'
Jan 21 23:19:26 compute-0 sudo[55314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:26 compute-0 python3.9[55316]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:19:26 compute-0 sudo[55314]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:27 compute-0 sudo[55466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yetqlblnrwvxihwjpaskwekptqyshkqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037566.9614043-290-65282223871427/AnsiballZ_ini_file.py'
Jan 21 23:19:27 compute-0 sudo[55466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:27 compute-0 python3.9[55468]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:19:27 compute-0 sudo[55466]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:28 compute-0 sudo[55618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbjwuchmlpvdnqmpxmnrbadvcjfriuct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037567.706531-290-32128632125710/AnsiballZ_ini_file.py'
Jan 21 23:19:28 compute-0 sudo[55618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:28 compute-0 python3.9[55620]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:19:28 compute-0 sudo[55618]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:28 compute-0 sudo[55770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jckangthjokvfkkyfplxbxdxlpdcdpgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037568.5080664-290-192278551229987/AnsiballZ_ini_file.py'
Jan 21 23:19:28 compute-0 sudo[55770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:28 compute-0 python3.9[55772]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:19:29 compute-0 sudo[55770]: pam_unix(sudo:session): session closed for user root
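The four community.general.ini_file invocations between 23:19:26 and 23:19:28 all edit /etc/containers/containers.conf and collapse naturally into one looped task. Sketch (task name and loop structure assumed; path, sections, options and values as logged, where the engine and network values are quoted TOML strings and pids_limit is a bare integer):

    - name: Tune containers.conf                       # name assumed
      community.general.ini_file:
        path: /etc/containers/containers.conf
        create: true
        owner: root
        group: root
        mode: "0644"
        setype: etc_t
        section: "{{ item.section }}"
        option: "{{ item.option }}"
        value: "{{ item.value }}"
      loop:
        - { section: containers, option: pids_limit, value: 4096 }
        - { section: engine, option: events_logger, value: '"journald"' }
        - { section: engine, option: runtime, value: '"crun"' }
        - { section: network, option: network_backend, value: '"netavark"' }

The expected result is pids_limit = 4096 under [containers], events_logger = "journald" and runtime = "crun" under [engine], and network_backend = "netavark" under [network].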
Jan 21 23:19:29 compute-0 sudo[55922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnmycnkikuhvyyvvgmnennuuiqdrnyuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037569.4373276-383-53866990143578/AnsiballZ_dnf.py'
Jan 21 23:19:29 compute-0 sudo[55922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:30 compute-0 python3.9[55924]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:19:31 compute-0 sudo[55922]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:32 compute-0 sudo[56075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsetyvzcocjumcwvvkljyxbfdbhcwnue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037571.8049164-416-249883534623695/AnsiballZ_setup.py'
Jan 21 23:19:32 compute-0 sudo[56075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:32 compute-0 python3.9[56077]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:19:32 compute-0 sudo[56075]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:33 compute-0 sudo[56229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alrowjkuumhkhgnarfwfwcconappzytj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037572.6972904-440-111778081365213/AnsiballZ_stat.py'
Jan 21 23:19:33 compute-0 sudo[56229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:33 compute-0 python3.9[56231]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:19:33 compute-0 sudo[56229]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:33 compute-0 sudo[56381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mshochzfacvbyyhqfvzgpbfvzysxkugk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037573.6966588-467-16816960465990/AnsiballZ_stat.py'
Jan 21 23:19:33 compute-0 sudo[56381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:34 compute-0 python3.9[56383]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:19:34 compute-0 sudo[56381]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:34 compute-0 sudo[56533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mekkypvfebssefseedcpwgcopkxrjxlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037574.5882356-497-81383900567052/AnsiballZ_command.py'
Jan 21 23:19:34 compute-0 sudo[56533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:35 compute-0 python3.9[56535]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:19:35 compute-0 sudo[56533]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:36 compute-0 sudo[56686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brlcmybksdkhtxdacuvofmgefbxyngnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037575.5381517-527-44551197915198/AnsiballZ_service_facts.py'
Jan 21 23:19:36 compute-0 sudo[56686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:36 compute-0 python3.9[56688]: ansible-service_facts Invoked
Jan 21 23:19:36 compute-0 network[56705]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 23:19:36 compute-0 network[56706]: 'network-scripts' will be removed from distribution in near future.
Jan 21 23:19:36 compute-0 network[56707]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 23:19:40 compute-0 sudo[56686]: pam_unix(sudo:session): session closed for user root
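The ansible-service_facts call enumerates systemd units and legacy SysV initscripts; probing the SysV 'network' script is apparently what triggers the three deprecation warnings above. The module takes no parameters:

    - name: Gather service facts        # name assumed
      ansible.builtin.service_facts: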
Jan 21 23:19:41 compute-0 sudo[56990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbcqrwgomzxajxqrwkojhvergxvhukbj ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769037581.3355172-572-278121095344468/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769037581.3355172-572-278121095344468/args'
Jan 21 23:19:41 compute-0 sudo[56990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:41 compute-0 sudo[56990]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:42 compute-0 sudo[57157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knpzgzwbyicbhzkcyvriktlgvjlfszww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037582.1287868-605-93552599638829/AnsiballZ_dnf.py'
Jan 21 23:19:42 compute-0 sudo[57157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:42 compute-0 python3.9[57159]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:19:43 compute-0 sudo[57157]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:45 compute-0 sudo[57310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kootiwzbhvxkpjyftxcielmvqscfqxdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037584.4728677-644-228231048605712/AnsiballZ_package_facts.py'
Jan 21 23:19:45 compute-0 sudo[57310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:45 compute-0 python3.9[57312]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 21 23:19:45 compute-0 sudo[57310]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:46 compute-0 sudo[57462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtyyqaxkncsbvbtezgshrjlxzfjxjawi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037586.3394923-674-94822886605205/AnsiballZ_stat.py'
Jan 21 23:19:46 compute-0 sudo[57462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:46 compute-0 python3.9[57464]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:19:47 compute-0 sudo[57462]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:47 compute-0 sudo[57587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfouxuhuxrrnaqoqubqydjnegzjwgiaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037586.3394923-674-94822886605205/AnsiballZ_copy.py'
Jan 21 23:19:47 compute-0 sudo[57587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:47 compute-0 python3.9[57589]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769037586.3394923-674-94822886605205/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:19:47 compute-0 sudo[57587]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:48 compute-0 sudo[57741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxuccdteruetjlojxdaudopkezeezzvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037588.121137-719-189695671760054/AnsiballZ_stat.py'
Jan 21 23:19:48 compute-0 sudo[57741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:48 compute-0 python3.9[57743]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:19:48 compute-0 sudo[57741]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:49 compute-0 sudo[57866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnfttilpsfpzkigqwptsuxzxcvucgpow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037588.121137-719-189695671760054/AnsiballZ_copy.py'
Jan 21 23:19:49 compute-0 sudo[57866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:49 compute-0 python3.9[57868]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769037588.121137-719-189695671760054/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:19:49 compute-0 sudo[57866]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:50 compute-0 sudo[58020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzrwcjaynnlwjxnssvkowsiqfpxhnaci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037590.3078878-782-165350782192896/AnsiballZ_lineinfile.py'
Jan 21 23:19:50 compute-0 sudo[58020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:51 compute-0 python3.9[58022]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:19:51 compute-0 sudo[58020]: pam_unix(sudo:session): session closed for user root
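Besides templating /etc/chrony.conf and /etc/sysconfig/chronyd (the stat/copy pairs above), the play pins PEERNTP=no so DHCP-supplied NTP servers do not override the chrony configuration. The logged lineinfile parameters reconstruct to (task name assumed):

    - name: Ignore NTP servers provided by DHCP   # name assumed
      ansible.builtin.lineinfile:
        path: /etc/sysconfig/network
        regexp: '^PEERNTP='
        line: PEERNTP=no
        create: true
        mode: "0644"
        backup: true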
Jan 21 23:19:52 compute-0 sudo[58174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xodzduhavyerkzykpzsqmibganhbwhfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037592.2311172-827-99360950143644/AnsiballZ_setup.py'
Jan 21 23:19:52 compute-0 sudo[58174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:52 compute-0 python3.9[58176]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:19:53 compute-0 sudo[58174]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:53 compute-0 sudo[58258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phzvaargkyjpqzmyajwskaqurilcowtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037592.2311172-827-99360950143644/AnsiballZ_systemd.py'
Jan 21 23:19:53 compute-0 sudo[58258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:54 compute-0 python3.9[58260]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:19:54 compute-0 sudo[58258]: pam_unix(sudo:session): session closed for user root
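The systemd call above enables and starts chronyd; the invocation that follows restarts it so the freshly written chrony.conf takes effect. As tasks (names assumed; module and parameters as logged):

    - name: Enable and start chronyd       # name assumed
      ansible.builtin.systemd:
        name: chronyd
        enabled: true
        state: started

    - name: Restart chronyd to pick up the new configuration   # name assumed
      ansible.builtin.systemd:
        name: chronyd
        state: restarted

The stop/start cycle and the chronyd 4.8 startup banner at 23:19:56 below are the visible effect of the restarted state.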
Jan 21 23:19:55 compute-0 sudo[58412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axoqvmwkympoiyhzkkuidkffrrazaaoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037594.9460895-875-147670632140441/AnsiballZ_setup.py'
Jan 21 23:19:55 compute-0 sudo[58412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:55 compute-0 python3.9[58414]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:19:55 compute-0 sudo[58412]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:56 compute-0 sudo[58496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvlmydyykkfwjdgunufnbccflasenvja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037594.9460895-875-147670632140441/AnsiballZ_systemd.py'
Jan 21 23:19:56 compute-0 sudo[58496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:19:56 compute-0 python3.9[58498]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 23:19:56 compute-0 chronyd[795]: chronyd exiting
Jan 21 23:19:56 compute-0 systemd[1]: Stopping NTP client/server...
Jan 21 23:19:56 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 21 23:19:56 compute-0 systemd[1]: Stopped NTP client/server.
Jan 21 23:19:56 compute-0 systemd[1]: Starting NTP client/server...
Jan 21 23:19:56 compute-0 chronyd[58506]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 21 23:19:56 compute-0 chronyd[58506]: Frequency -31.034 +/- 1.285 ppm read from /var/lib/chrony/drift
Jan 21 23:19:56 compute-0 chronyd[58506]: Loaded seccomp filter (level 2)
Jan 21 23:19:56 compute-0 systemd[1]: Started NTP client/server.
Jan 21 23:19:56 compute-0 sudo[58496]: pam_unix(sudo:session): session closed for user root
Jan 21 23:19:57 compute-0 sshd-session[53559]: Connection closed by 192.168.122.30 port 53912
Jan 21 23:19:57 compute-0 sshd-session[53556]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:19:57 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 21 23:19:57 compute-0 systemd[1]: session-12.scope: Consumed 28.044s CPU time.
Jan 21 23:19:57 compute-0 systemd-logind[786]: Session 12 logged out. Waiting for processes to exit.
Jan 21 23:19:57 compute-0 systemd-logind[786]: Removed session 12.
Jan 21 23:20:02 compute-0 sshd-session[58532]: Accepted publickey for zuul from 192.168.122.30 port 43658 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:20:02 compute-0 systemd-logind[786]: New session 13 of user zuul.
Jan 21 23:20:02 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 21 23:20:02 compute-0 sshd-session[58532]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:20:03 compute-0 sudo[58685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sujeqfbomumngcatehuptugwqlxqxkgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037603.0068417-26-159531136835127/AnsiballZ_file.py'
Jan 21 23:20:03 compute-0 sudo[58685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:03 compute-0 python3.9[58687]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:03 compute-0 sudo[58685]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:04 compute-0 sudo[58837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ochwykxirzzxjaaneabibensvoaoqxae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037604.0465176-62-179982190386377/AnsiballZ_stat.py'
Jan 21 23:20:04 compute-0 sudo[58837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:04 compute-0 python3.9[58839]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:20:04 compute-0 sudo[58837]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:05 compute-0 sudo[58960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwhwyjtqvtdsocrhjmcnchjudyveqogg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037604.0465176-62-179982190386377/AnsiballZ_copy.py'
Jan 21 23:20:05 compute-0 sudo[58960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:05 compute-0 python3.9[58962]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769037604.0465176-62-179982190386377/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:05 compute-0 sudo[58960]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:05 compute-0 sshd-session[58535]: Connection closed by 192.168.122.30 port 43658
Jan 21 23:20:05 compute-0 sshd-session[58532]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:20:05 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 21 23:20:05 compute-0 systemd[1]: session-13.scope: Consumed 1.805s CPU time.
Jan 21 23:20:05 compute-0 systemd-logind[786]: Session 13 logged out. Waiting for processes to exit.
Jan 21 23:20:05 compute-0 systemd-logind[786]: Removed session 13.
Jan 21 23:20:10 compute-0 sshd-session[58987]: Accepted publickey for zuul from 192.168.122.30 port 60714 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:20:10 compute-0 systemd-logind[786]: New session 14 of user zuul.
Jan 21 23:20:10 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 21 23:20:10 compute-0 sshd-session[58987]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:20:11 compute-0 python3.9[59140]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:20:13 compute-0 sudo[59294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dglgxgsjhubellyofyttdvuvqurhdjoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037612.5344415-59-181040556471056/AnsiballZ_file.py'
Jan 21 23:20:13 compute-0 sudo[59294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:13 compute-0 python3.9[59296]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:13 compute-0 sudo[59294]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:14 compute-0 sudo[59469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldrzmclagtmsssiedybhukbguujspful ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037613.5146616-83-36922057320734/AnsiballZ_stat.py'
Jan 21 23:20:14 compute-0 sudo[59469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:14 compute-0 python3.9[59471]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:20:14 compute-0 sudo[59469]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:14 compute-0 sudo[59592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqpcakapbdjflerybhvxqdnvsybqxsqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037613.5146616-83-36922057320734/AnsiballZ_copy.py'
Jan 21 23:20:14 compute-0 sudo[59592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:15 compute-0 python3.9[59594]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769037613.5146616-83-36922057320734/.source.json _original_basename=.8c9mad4c follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:15 compute-0 sudo[59592]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:16 compute-0 sudo[59744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovbfsfzjoqkhsilrjbcrjromcestiwke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037615.6505444-152-57168493123000/AnsiballZ_stat.py'
Jan 21 23:20:16 compute-0 sudo[59744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:16 compute-0 python3.9[59746]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:20:16 compute-0 sudo[59744]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:16 compute-0 sudo[59867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmoupuwroejzlskxztajbofwnmgeycvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037615.6505444-152-57168493123000/AnsiballZ_copy.py'
Jan 21 23:20:16 compute-0 sudo[59867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:16 compute-0 python3.9[59869]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769037615.6505444-152-57168493123000/.source _original_basename=.wg5iwfsn follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:16 compute-0 sudo[59867]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:17 compute-0 sudo[60019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-talfrppxlbmcgltxuzzidjejzowrvqne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037617.206901-200-28390637606856/AnsiballZ_file.py'
Jan 21 23:20:17 compute-0 sudo[60019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:17 compute-0 python3.9[60021]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:20:17 compute-0 sudo[60019]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:18 compute-0 sudo[60171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwkepbkvjlgtxzwaxgmkjldwevjqxmsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037618.0383184-224-106469388777883/AnsiballZ_stat.py'
Jan 21 23:20:18 compute-0 sudo[60171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:18 compute-0 python3.9[60173]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:20:18 compute-0 sudo[60171]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:19 compute-0 sudo[60294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrhdoobnkbgcjponlqwkembliindeaxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037618.0383184-224-106469388777883/AnsiballZ_copy.py'
Jan 21 23:20:19 compute-0 sudo[60294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:19 compute-0 python3.9[60296]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769037618.0383184-224-106469388777883/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:20:19 compute-0 sudo[60294]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:19 compute-0 sudo[60446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykbkxithnkylwasxulzrktqcquimrbwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037619.4201725-224-128205255258723/AnsiballZ_stat.py'
Jan 21 23:20:19 compute-0 sudo[60446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:19 compute-0 python3.9[60448]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:20:19 compute-0 sudo[60446]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:20 compute-0 sudo[60569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-benvttuwayauqcaixxudmrzjnflbnzhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037619.4201725-224-128205255258723/AnsiballZ_copy.py'
Jan 21 23:20:20 compute-0 sudo[60569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:20 compute-0 python3.9[60571]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769037619.4201725-224-128205255258723/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:20:20 compute-0 sudo[60569]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:21 compute-0 sudo[60721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmekovtwnwmeseprccodhfcqepcznfxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037620.827719-311-261381355262661/AnsiballZ_file.py'
Jan 21 23:20:21 compute-0 sudo[60721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:21 compute-0 python3.9[60723]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:21 compute-0 sudo[60721]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:21 compute-0 sudo[60873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypedyjrounanviaqtjznovmcxjaknwhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037621.576426-335-35182212208760/AnsiballZ_stat.py'
Jan 21 23:20:21 compute-0 sudo[60873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:22 compute-0 python3.9[60875]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:20:22 compute-0 sudo[60873]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:22 compute-0 sudo[60996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncbwlwnakrcjfgzvqkmwowivyvsnurci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037621.576426-335-35182212208760/AnsiballZ_copy.py'
Jan 21 23:20:22 compute-0 sudo[60996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:22 compute-0 python3.9[60998]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769037621.576426-335-35182212208760/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:22 compute-0 sudo[60996]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:23 compute-0 sudo[61148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dscllhumhyadtcwdlcdbauisnadnqyon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037622.9307036-380-18747347512874/AnsiballZ_stat.py'
Jan 21 23:20:23 compute-0 sudo[61148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:23 compute-0 python3.9[61150]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:20:23 compute-0 sudo[61148]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:23 compute-0 sudo[61271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdxwsamymumvlyxbwzhqmojudhadpsrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037622.9307036-380-18747347512874/AnsiballZ_copy.py'
Jan 21 23:20:23 compute-0 sudo[61271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:24 compute-0 python3.9[61273]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769037622.9307036-380-18747347512874/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:24 compute-0 sudo[61271]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:25 compute-0 sudo[61423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyegqqzrjzxzgrmsmbmdpzkcqjivbdiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037624.367626-425-229184818782485/AnsiballZ_systemd.py'
Jan 21 23:20:25 compute-0 sudo[61423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:25 compute-0 python3.9[61425]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:20:25 compute-0 systemd[1]: Reloading.
Jan 21 23:20:25 compute-0 systemd-sysv-generator[61455]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:20:25 compute-0 systemd-rc-local-generator[61451]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:20:25 compute-0 systemd[1]: Reloading.
Jan 21 23:20:25 compute-0 systemd-rc-local-generator[61485]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:20:25 compute-0 systemd-sysv-generator[61488]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:20:25 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 21 23:20:25 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 21 23:20:25 compute-0 sudo[61423]: pam_unix(sudo:session): session closed for user root
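The ansible.builtin.systemd call above combines daemon_reload=True with enable and start, which accounts for the "Reloading." journal entries before the unit runs; the second reload appears to accompany the enable/preset step. Sketch (name assumed):

    - name: Enable and start edpm-container-shutdown   # name assumed
      ansible.builtin.systemd:
        name: edpm-container-shutdown
        daemon_reload: true
        enabled: true
        state: started

The netns-placeholder unit at 23:20:26 through 23:20:30 below is installed and activated with the same copy/preset/systemd sequence.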
Jan 21 23:20:26 compute-0 sudo[61652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uunjeuclekpkhdmnrrytofpxmuyjsdmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037626.163977-449-119943350288937/AnsiballZ_stat.py'
Jan 21 23:20:26 compute-0 sudo[61652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:26 compute-0 python3.9[61654]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:20:26 compute-0 sudo[61652]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:27 compute-0 sudo[61775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daipbzrhnmjohyuzazdfnbhqeywtbkhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037626.163977-449-119943350288937/AnsiballZ_copy.py'
Jan 21 23:20:27 compute-0 sudo[61775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:27 compute-0 python3.9[61777]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769037626.163977-449-119943350288937/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:27 compute-0 sudo[61775]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:27 compute-0 sudo[61927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohlazblvusdcrgwmuxsujdgscokkqzhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037627.5539289-494-257857812239250/AnsiballZ_stat.py'
Jan 21 23:20:27 compute-0 sudo[61927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:28 compute-0 python3.9[61929]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:20:28 compute-0 sudo[61927]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:28 compute-0 sudo[62050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byntqhexufxdocbjqeeoambhciizuqby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037627.5539289-494-257857812239250/AnsiballZ_copy.py'
Jan 21 23:20:28 compute-0 sudo[62050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:28 compute-0 python3.9[62052]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769037627.5539289-494-257857812239250/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:28 compute-0 sudo[62050]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:29 compute-0 sudo[62202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqjttscxksspywrcmbfvqtvwoptrpbdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037629.156718-539-206627998527111/AnsiballZ_systemd.py'
Jan 21 23:20:29 compute-0 sudo[62202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:29 compute-0 python3.9[62204]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:20:29 compute-0 systemd[1]: Reloading.
Jan 21 23:20:29 compute-0 systemd-rc-local-generator[62232]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:20:29 compute-0 systemd-sysv-generator[62235]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:20:30 compute-0 systemd[1]: Reloading.
Jan 21 23:20:30 compute-0 systemd-sysv-generator[62272]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:20:30 compute-0 systemd-rc-local-generator[62265]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:20:30 compute-0 systemd[1]: Starting Create netns directory...
Jan 21 23:20:30 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 21 23:20:30 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 21 23:20:30 compute-0 systemd[1]: Finished Create netns directory.
Jan 21 23:20:30 compute-0 sudo[62202]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:31 compute-0 python3.9[62430]: ansible-ansible.builtin.service_facts Invoked
Jan 21 23:20:31 compute-0 network[62447]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 23:20:31 compute-0 network[62448]: 'network-scripts' will be removed from distribution in near future.
Jan 21 23:20:31 compute-0 network[62449]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 23:20:35 compute-0 sudo[62709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hunergosegbsephingalawnpfgcaqojk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037635.63829-587-67264162841582/AnsiballZ_systemd.py'
Jan 21 23:20:35 compute-0 sudo[62709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:36 compute-0 python3.9[62711]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:20:36 compute-0 systemd[1]: Reloading.
Jan 21 23:20:36 compute-0 systemd-rc-local-generator[62742]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:20:36 compute-0 systemd-sysv-generator[62745]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:20:36 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 21 23:20:36 compute-0 iptables.init[62752]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 21 23:20:36 compute-0 iptables.init[62752]: iptables: Flushing firewall rules: [  OK  ]
Jan 21 23:20:36 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 21 23:20:36 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 21 23:20:36 compute-0 sudo[62709]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:37 compute-0 sudo[62947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjjjnrlbcferkokxkccmnncbtbksmfyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037637.1408975-587-236566501672384/AnsiballZ_systemd.py'
Jan 21 23:20:37 compute-0 sudo[62947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:37 compute-0 python3.9[62949]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:20:37 compute-0 sudo[62947]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:38 compute-0 sudo[63101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daizrhaunqxltyukvoevrehuewdlpixs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037638.1957276-635-252418666297460/AnsiballZ_systemd.py'
Jan 21 23:20:38 compute-0 sudo[63101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:38 compute-0 python3.9[63103]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:20:38 compute-0 systemd[1]: Reloading.
Jan 21 23:20:38 compute-0 systemd-rc-local-generator[63132]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:20:38 compute-0 systemd-sysv-generator[63135]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:20:39 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 21 23:20:39 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 21 23:20:39 compute-0 sudo[63101]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:39 compute-0 sudo[63293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlejztutmrvhibptkuakgdcxjbscqzxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037639.4588401-659-143352280072093/AnsiballZ_command.py'
Jan 21 23:20:39 compute-0 sudo[63293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:40 compute-0 python3.9[63295]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:20:40 compute-0 sudo[63293]: pam_unix(sudo:session): session closed for user root
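Between 23:20:35 and 23:20:40 the play swaps the firewall backend: iptables.service and ip6tables.service are stopped and disabled, nftables is enabled and started, and any leftover ruleset is flushed. Reconstructed sketch (task names and the loop are assumptions; modules, unit names and the nft command as logged):

    - name: Stop and disable the iptables services     # names assumed
      ansible.builtin.systemd:
        name: "{{ item }}"
        enabled: false
        state: stopped
      loop:
        - iptables.service
        - ip6tables.service

    - name: Enable and start nftables
      ansible.builtin.systemd:
        name: nftables
        enabled: true
        state: started

    - name: Flush any pre-existing ruleset
      ansible.builtin.command: nft flush ruleset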
Jan 21 23:20:41 compute-0 sudo[63446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbyslyuiejbqkenqhsjfmixlzqwgwrfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037640.9380574-701-153214363503187/AnsiballZ_stat.py'
Jan 21 23:20:41 compute-0 sudo[63446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:41 compute-0 python3.9[63448]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:20:41 compute-0 sudo[63446]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:41 compute-0 sudo[63571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yswypnvitlojnxfpnqnxtxhbpdlimvff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037640.9380574-701-153214363503187/AnsiballZ_copy.py'
Jan 21 23:20:41 compute-0 sudo[63571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:42 compute-0 python3.9[63573]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769037640.9380574-701-153214363503187/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:42 compute-0 sudo[63571]: pam_unix(sudo:session): session closed for user root
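The sshd_config deployment above carries validate=/usr/sbin/sshd -T -f %s: the staged file is first checked with sshd in test mode and only replaces /etc/ssh/sshd_config if the check passes. Sketch (task name and src assumed from _original_basename; dest, mode and validate as logged):

    - name: Deploy sshd_config with pre-install validation   # name assumed
      ansible.builtin.template:
        src: sshd_config_block.j2
        dest: /etc/ssh/sshd_config
        mode: "0600"
        validate: /usr/sbin/sshd -T -f %s

The reload that follows (SIGHUP at 23:20:43) makes the running daemon re-read the new file without dropping the active session.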
Jan 21 23:20:42 compute-0 sudo[63724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqdzeggogxtimdyimymbmdyaqggninup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037642.339074-746-33381407482080/AnsiballZ_systemd.py'
Jan 21 23:20:42 compute-0 sudo[63724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:42 compute-0 python3.9[63726]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 23:20:43 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 21 23:20:43 compute-0 sshd[1007]: Received SIGHUP; restarting.
Jan 21 23:20:43 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 21 23:20:43 compute-0 sshd[1007]: Server listening on 0.0.0.0 port 22.
Jan 21 23:20:43 compute-0 sshd[1007]: Server listening on :: port 22.
Jan 21 23:20:43 compute-0 sudo[63724]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:43 compute-0 sudo[63880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuzwekrneabjvupiurgnnlsvnvgjcxik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037643.3442464-770-193554740719752/AnsiballZ_file.py'
Jan 21 23:20:43 compute-0 sudo[63880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:43 compute-0 python3.9[63882]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:43 compute-0 sudo[63880]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:44 compute-0 sudo[64032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsgcxysngzhkifkharwoknkcpcrwlgzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037644.1934974-794-98805276527020/AnsiballZ_stat.py'
Jan 21 23:20:44 compute-0 sudo[64032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:44 compute-0 python3.9[64034]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:20:44 compute-0 sudo[64032]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:45 compute-0 sudo[64155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwlcbjoawjfstgpdxatilhyxfdbjlzpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037644.1934974-794-98805276527020/AnsiballZ_copy.py'
Jan 21 23:20:45 compute-0 sudo[64155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:45 compute-0 python3.9[64157]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769037644.1934974-794-98805276527020/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:45 compute-0 sudo[64155]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:46 compute-0 sudo[64307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tblimbtuuonvhrivxoufisjtwrkviryd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037645.8768857-848-135293892509096/AnsiballZ_timezone.py'
Jan 21 23:20:46 compute-0 sudo[64307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:46 compute-0 python3.9[64309]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 21 23:20:46 compute-0 systemd[1]: Starting Time & Date Service...
Jan 21 23:20:46 compute-0 systemd[1]: Started Time & Date Service.
Jan 21 23:20:46 compute-0 sudo[64307]: pam_unix(sudo:session): session closed for user root
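
community.general.timezone sets the zone to UTC; on a systemd host it goes through timedated, which is why systemd-timedated starts here. The same change by hand (the verification line is illustrative):

    timedatectl set-timezone UTC
    timedatectl show --property=Timezone    # expect Timezone=UTC
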
Jan 21 23:20:48 compute-0 sudo[64463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tedyjbfxjidzykakbifzanettagmyfxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037648.0921435-875-88719217847597/AnsiballZ_file.py'
Jan 21 23:20:48 compute-0 sudo[64463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:48 compute-0 python3.9[64465]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:48 compute-0 sudo[64463]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:49 compute-0 sudo[64615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdmngytirggqqxulhkbscfxltsnbqaxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037649.0370405-899-194441571951647/AnsiballZ_stat.py'
Jan 21 23:20:49 compute-0 sudo[64615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:49 compute-0 python3.9[64617]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:20:49 compute-0 sudo[64615]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:50 compute-0 sudo[64738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crjntvyjgvyerwbobwygqgjhxianekwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037649.0370405-899-194441571951647/AnsiballZ_copy.py'
Jan 21 23:20:50 compute-0 sudo[64738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:50 compute-0 python3.9[64740]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769037649.0370405-899-194441571951647/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:50 compute-0 sudo[64738]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:50 compute-0 sudo[64890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbieqzzyqayfbbkoovcyfhhfzdgfodba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037650.446941-944-19053616756323/AnsiballZ_stat.py'
Jan 21 23:20:50 compute-0 sudo[64890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:50 compute-0 python3.9[64892]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:20:50 compute-0 sudo[64890]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:51 compute-0 sudo[65013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpujkksqxwgmlmadpzptfhlqyppakkvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037650.446941-944-19053616756323/AnsiballZ_copy.py'
Jan 21 23:20:51 compute-0 sudo[65013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:51 compute-0 python3.9[65015]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769037650.446941-944-19053616756323/.source.yaml _original_basename=.v4dkwsc5 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:51 compute-0 sudo[65013]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:52 compute-0 sudo[65165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rymloseteqirfxbdxjacsleflhowokwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037651.8259435-989-268909229059883/AnsiballZ_stat.py'
Jan 21 23:20:52 compute-0 sudo[65165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:52 compute-0 python3.9[65167]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:20:52 compute-0 sudo[65165]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:52 compute-0 sudo[65288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yssneniytfglflyhcyelcfrnunvxther ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037651.8259435-989-268909229059883/AnsiballZ_copy.py'
Jan 21 23:20:52 compute-0 sudo[65288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:53 compute-0 python3.9[65290]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769037651.8259435-989-268909229059883/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:53 compute-0 sudo[65288]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:53 compute-0 sudo[65440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqxdnlbezilueyxloacbrtqaaugpgdvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037653.42905-1034-255933010510828/AnsiballZ_command.py'
Jan 21 23:20:53 compute-0 sudo[65440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:53 compute-0 python3.9[65442]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:20:53 compute-0 sudo[65440]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:54 compute-0 sudo[65593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rttfqksxbsrekizdlqeunpwtgihiepun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037654.2706444-1058-229846305230575/AnsiballZ_command.py'
Jan 21 23:20:54 compute-0 sudo[65593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:54 compute-0 python3.9[65595]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:20:54 compute-0 sudo[65593]: pam_unix(sudo:session): session closed for user root
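
Two command tasks follow the copy of /etc/nftables/iptables.nft: the first loads the iptables-compat ruleset, the second dumps the now-live ruleset as JSON, presumably for later tasks to inspect. By hand (the output path is illustrative):

    nft -f /etc/nftables/iptables.nft        # load the converted iptables rules
    nft -j list ruleset > /tmp/ruleset.json  # machine-readable snapshot of live state
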
Jan 21 23:20:55 compute-0 sudo[65746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzbfzoxywjjikroninfcobbhijansedg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769037655.1210976-1082-145899896239451/AnsiballZ_edpm_nftables_from_files.py'
Jan 21 23:20:55 compute-0 sudo[65746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:55 compute-0 python3[65748]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 21 23:20:55 compute-0 sudo[65746]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:56 compute-0 sudo[65898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qogqvkntuudvfrrykwnhkwtyplysqcly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037656.0619664-1106-69472762517072/AnsiballZ_stat.py'
Jan 21 23:20:56 compute-0 sudo[65898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:56 compute-0 python3.9[65900]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:20:56 compute-0 sudo[65898]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:56 compute-0 sudo[66021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arghfsvpaozymvarjtsqpdjflgikpivc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037656.0619664-1106-69472762517072/AnsiballZ_copy.py'
Jan 21 23:20:56 compute-0 sudo[66021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:57 compute-0 python3.9[66023]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769037656.0619664-1106-69472762517072/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:57 compute-0 sudo[66021]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:57 compute-0 sudo[66173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvesrtmloetmzlczhifmwsevrmihmebg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037657.546325-1151-26939755760248/AnsiballZ_stat.py'
Jan 21 23:20:57 compute-0 sudo[66173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:58 compute-0 python3.9[66175]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:20:58 compute-0 sudo[66173]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:58 compute-0 sudo[66296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vthcshzptbhcjfkmdpdredgrqjxtsgld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037657.546325-1151-26939755760248/AnsiballZ_copy.py'
Jan 21 23:20:58 compute-0 sudo[66296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:58 compute-0 python3.9[66298]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769037657.546325-1151-26939755760248/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:20:58 compute-0 sudo[66296]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:59 compute-0 sudo[66448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zumiykjhoobsvwjwdlmybnrynjetiyno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037659.0565004-1196-84981289461597/AnsiballZ_stat.py'
Jan 21 23:20:59 compute-0 sudo[66448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:20:59 compute-0 python3.9[66450]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:20:59 compute-0 sudo[66448]: pam_unix(sudo:session): session closed for user root
Jan 21 23:20:59 compute-0 sudo[66571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwxomosivoqxdwkdpshisimvhmhsctel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037659.0565004-1196-84981289461597/AnsiballZ_copy.py'
Jan 21 23:20:59 compute-0 sudo[66571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:00 compute-0 python3.9[66573]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769037659.0565004-1196-84981289461597/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:21:00 compute-0 sudo[66571]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:00 compute-0 sudo[66723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwodsxkycdtiblwgnunbijdjgjhqqnwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037660.4436326-1241-58825948680019/AnsiballZ_stat.py'
Jan 21 23:21:00 compute-0 sudo[66723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:00 compute-0 python3.9[66725]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:21:00 compute-0 sudo[66723]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:01 compute-0 sudo[66846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcobzzhgtxpncbdzbgjigivjhxctzxeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037660.4436326-1241-58825948680019/AnsiballZ_copy.py'
Jan 21 23:21:01 compute-0 sudo[66846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:01 compute-0 python3.9[66848]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769037660.4436326-1241-58825948680019/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:21:01 compute-0 sudo[66846]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:02 compute-0 sudo[66998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecwwodtxfquidbwzmdqgtwouqjgbacqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037661.9551337-1286-200854589924576/AnsiballZ_stat.py'
Jan 21 23:21:02 compute-0 sudo[66998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:02 compute-0 python3.9[67000]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:21:02 compute-0 sudo[66998]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:02 compute-0 sudo[67121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ainwidwfwhqglxvkoxovrlweixamdute ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037661.9551337-1286-200854589924576/AnsiballZ_copy.py'
Jan 21 23:21:02 compute-0 sudo[67121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:03 compute-0 python3.9[67123]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769037661.9551337-1286-200854589924576/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:21:03 compute-0 sudo[67121]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:03 compute-0 sudo[67273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awhzgidiuzhtqeswvekauaalivdafcon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037663.4195065-1331-64201568622797/AnsiballZ_file.py'
Jan 21 23:21:03 compute-0 sudo[67273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:03 compute-0 python3.9[67275]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:21:03 compute-0 sudo[67273]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:04 compute-0 sudo[67425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txcxndoawkvfftxjskdtqahvcushqwrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037664.2739136-1355-24778831857386/AnsiballZ_command.py'
Jan 21 23:21:04 compute-0 sudo[67425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:04 compute-0 python3.9[67427]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:21:04 compute-0 sudo[67425]: pam_unix(sudo:session): session closed for user root
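
After the edpm_nftables_from_files module renders the YAML under /var/lib/edpm-config/firewall into the fragment files above, the play dry-runs all five fragments in their load order; -c checks syntax and referential consistency without touching the live ruleset. Reconstructed from the logged _raw_params:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # check only, apply nothing
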
Jan 21 23:21:05 compute-0 sudo[67584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekhsrcwywvtuzthfuluphfdiyqbqdbkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037665.1959221-1379-134237030041519/AnsiballZ_blockinfile.py'
Jan 21 23:21:05 compute-0 sudo[67584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:05 compute-0 python3.9[67586]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:21:06 compute-0 sudo[67584]: pam_unix(sudo:session): session closed for user root
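
The blockinfile task makes the ruleset boot-persistent by writing a marker-delimited include block into /etc/sysconfig/nftables.conf, validated with nft -c before the edit is kept. A one-shot shell approximation of what it writes, reconstructed from the logged block= and marker= arguments (blockinfile itself is idempotent; this heredoc is not):

    cat >> /etc/sysconfig/nftables.conf <<'EOF'
    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
    EOF
    nft -c -f /etc/sysconfig/nftables.conf   # same check the module's validate= runs
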
Jan 21 23:21:06 compute-0 sudo[67737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qukdbllbdezvtpxjzepeicopumjupdiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037666.3150837-1406-193320192389307/AnsiballZ_file.py'
Jan 21 23:21:06 compute-0 sudo[67737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:06 compute-0 python3.9[67739]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:21:06 compute-0 sudo[67737]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:07 compute-0 sudo[67889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akvilffdfwngcfwzxcskcumttconktgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037667.0771534-1406-186426201253773/AnsiballZ_file.py'
Jan 21 23:21:07 compute-0 sudo[67889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:07 compute-0 python3.9[67891]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:21:07 compute-0 sudo[67889]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:08 compute-0 sudo[68041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-casbutaxeadqdnhvviutuwdttjsvocsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037667.9566295-1451-62507622157665/AnsiballZ_mount.py'
Jan 21 23:21:08 compute-0 sudo[68041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:08 compute-0 python3.9[68043]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 21 23:21:08 compute-0 sudo[68041]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:09 compute-0 sudo[68194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrcmcqjtvliuxyxygkgryrxftzoqubqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037668.81279-1451-240987863674366/AnsiballZ_mount.py'
Jan 21 23:21:09 compute-0 sudo[68194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:09 compute-0 python3.9[68196]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 21 23:21:09 compute-0 sudo[68194]: pam_unix(sudo:session): session closed for user root
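
The two ansible.posix.mount tasks create per-size hugetlbfs mounts; state=mounted both mounts them now and persists them for boot. Hand-run equivalent plus the approximate fstab entries it leaves behind:

    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # persisted roughly as:
    #   none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    #   none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
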
Jan 21 23:21:09 compute-0 sshd-session[58990]: Connection closed by 192.168.122.30 port 60714
Jan 21 23:21:09 compute-0 sshd-session[58987]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:21:09 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 21 23:21:09 compute-0 systemd[1]: session-14.scope: Consumed 39.401s CPU time.
Jan 21 23:21:09 compute-0 systemd-logind[786]: Session 14 logged out. Waiting for processes to exit.
Jan 21 23:21:09 compute-0 systemd-logind[786]: Removed session 14.
Jan 21 23:21:15 compute-0 sshd-session[68222]: Accepted publickey for zuul from 192.168.122.30 port 38108 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:21:15 compute-0 systemd-logind[786]: New session 15 of user zuul.
Jan 21 23:21:15 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 21 23:21:15 compute-0 sshd-session[68222]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:21:16 compute-0 sudo[68375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlwpwdhyppggldzbzzdjuxdasmvhovph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037675.6006613-23-5189017331012/AnsiballZ_tempfile.py'
Jan 21 23:21:16 compute-0 sudo[68375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:16 compute-0 python3.9[68377]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 21 23:21:16 compute-0 sudo[68375]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:16 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 21 23:21:17 compute-0 sudo[68529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwwzsyrqrscpmmtjqxmqnfrdswfzpzyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037676.5379927-59-93570125740808/AnsiballZ_stat.py'
Jan 21 23:21:17 compute-0 sudo[68529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:17 compute-0 python3.9[68531]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:21:17 compute-0 sudo[68529]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:18 compute-0 sudo[68681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkpkhhwtkouhdsnxpwadimjfgzllyijd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037677.5440578-89-196153870545488/AnsiballZ_setup.py'
Jan 21 23:21:18 compute-0 sudo[68681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:18 compute-0 python3.9[68683]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:21:18 compute-0 sudo[68681]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:19 compute-0 sudo[68833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psyzfdetpwltbxgktkncbiritvkprjkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037678.7547095-114-110342817175949/AnsiballZ_blockinfile.py'
Jan 21 23:21:19 compute-0 sudo[68833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:19 compute-0 python3.9[68835]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6zIwRdzuVPSMYHryNuK9eshVk/94AdKSgczxgPpAUAgv1pRdk1RrZxNhBhnpleF1/WCOT3PGscfxf/xua2WZKIZe0Qb1MOHOok2+eI5T7qv3bh7JsxcGnnpHvypsZIC6uaEmQu8mt+yBg9IJcFDJNwOkM+LyWbF4jRxU32MW//D7snXiyYKce7U5n921ZpxWpX0wQpiGSvvhVaSKgjJ12Qm30AfCwc9Gl/dwJ+8SB/VKfcPK5dGnaKteOlDj32FuT5VwsZRTuwmLsXZEjTwzbJbx32BZD+MOVGVlsT2BzorpcSbGf3yJh/qNmuRQLEBR8QcgTOQ8nh/e1hHXpfpg2liVLFbQtnRLaT+Ag65R8Tau6cHlQisMu5YBmFvY9q7EACrxe7Uavv7N19DoAG1AJejelEEReYaGNzIddWd8jLxM/c5UWsFVHZYuOqlvA9pQLCgFIeOQZkLRQnB32SoCptagf95NlDEARDFeCjQQjTCIRd22xbCDCDk47B6blY0k=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINHR/2fehxatNJgT9VzNjvKNTWkTHFG641eICQ8hedGu
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBp3CfTIdOEoh91MPd1RH3hVEuEee5LbmruYGsmGAX+dvECmqm9iE9VXKTlo8wPu5sj6SzmxIcTnNG3XoPMq2SE=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCcTCSmBDjNHSZfaHG8fKTwJ4GF2jXfxozwo4mW5kcz/MAe+h5bas2d3r9FavORp/q67J4ZkPP2YsZprpzH/cCMpCJy4msytgeGplSBQmMw4Mybm9FemjlDMz+p8hES75I/8Lsrn0hI2jnW06F3l2pmJ3lg6xHUBqBTbLCh9S5FEHDnzzBfekLREeN4Vo8hRbDxXVEf1J/9OrEtSgNBBGVlAX8166VfPo2u+DIPXKcYFO80JpSHMFkcAGwQKkiBzVg18RmbA0LZVc2J659He3C8sLe01q8pTBbmtS9OaAWL27r9vC5f+yYkt/b+aHborYPFYHzyXpO7qNx28Aq5S4eFs7susZNV1FTL2beXRfOlYBLrFwy95VtxeFQi/OwO3YX8jhPpv08c9BY2+U7t3+kXcRQpbYcNnryIdCrUQ22eqogka311YLSnGbaPXjBMygOMU3wsKYpFSVMEXeT0Bg/ZNhaAkD9NNVE8+EE5ycnJTe1l4czVuAEmGwypQ1HgGok=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOfVjUuuW00Wki8wzseTLka/NNgXZv01yFssrjqPd+vx
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFxAU6jnvKfeGnQCjanLn0gpiYTpeExRBIXO5JrMYzMY98jAeCG9Lktt11h9g/CH/mue3MKLaP3lm3xf41m6zbk=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD7OVoFJg81ARdVJA6FyjUI977hlEvtniq2MXKgT3+nUajJ/3zk0XrH9mqvnc7jGz1Fq9+A4wkfRZtVnIrSpwkWbHn3JVL+1mcHJJ6dVIN4pspwgMzeYWm8GG4IYxREKgFCO78ae7vy8DLO9Yi4L+xt6d8Uni8chzNjGMRPdF4FSt+CXwzwGzOQJML3t+bTWLuZRnYroDhrVD0w4AlD+nalMPzvjpAzMn5ZQVTYkQ8sZR7AHw27yAtolX0jzmhql0UCKLUOmWMZxFbGWBTcLCT4COxHXJN+STZ0AbVq1vYG6dQJybeUzXYasq5HK7jx4CFgZTCROxv0lWjOXbN6QbVPVUhxl7tourrbcBhURHA2b9PYkDUIWGqbvaZRWT2PFnTFUx8TCdZZhJRdB+UuryMzpiQ/SHsWtLHR8EVChV7JhPjRfsGibqpF/aqGE9vdiOdM3Ropqlgqn8bSVdD2DPsuKl/UBu0CnLmqPBtozX7rBGvtP/vXyrstFdMWspO/tQs=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBA9Ham04cvw39gDVvgsX1L6qw86QKeK+eylBdUgm9ej
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKIaRu8jlZgKyVs6rhHSbKal+29RD+wf0CzqvOjZMOqqZElzcAyYT09MEy7bg54xF2mQd4qnfLLyE+7XxpD7dZY=
                                             create=True mode=0644 path=/tmp/ansible.d79q_sh5 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:21:19 compute-0 sudo[68833]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:20 compute-0 sudo[68985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxsoawdlmarlyngxvqczgjzhxugvlqby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037679.7042115-138-191878314677189/AnsiballZ_command.py'
Jan 21 23:21:20 compute-0 sudo[68985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:20 compute-0 python3.9[68987]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.d79q_sh5' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:21:20 compute-0 sudo[68985]: pam_unix(sudo:session): session closed for user root
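
The known_hosts workflow builds the cluster-wide file off to the side: tempfile creates /tmp/ansible.d79q_sh5, blockinfile fills it with the marker-delimited host keys gathered from all three compute nodes, then a shell task clobbers the real file. The shape of the logged sequence:

    TMP=/tmp/ansible.d79q_sh5                 # created by the tempfile module
    # ...blockinfile writes the managed block of host keys into $TMP...
    cat "$TMP" > /etc/ssh/ssh_known_hosts     # replace the system-wide file
    rm -f "$TMP"                              # cleanup task below
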
Jan 21 23:21:21 compute-0 sudo[69139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-capsloyuflvxnsavmaxfywuzqpldbaxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037680.6246645-162-265808356667947/AnsiballZ_file.py'
Jan 21 23:21:21 compute-0 sudo[69139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:21 compute-0 python3.9[69141]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.d79q_sh5 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:21:21 compute-0 sudo[69139]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:21 compute-0 sshd-session[68225]: Connection closed by 192.168.122.30 port 38108
Jan 21 23:21:21 compute-0 sshd-session[68222]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:21:21 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 21 23:21:21 compute-0 systemd[1]: session-15.scope: Consumed 3.662s CPU time.
Jan 21 23:21:21 compute-0 systemd-logind[786]: Session 15 logged out. Waiting for processes to exit.
Jan 21 23:21:21 compute-0 systemd-logind[786]: Removed session 15.
Jan 21 23:21:26 compute-0 sshd-session[69166]: Accepted publickey for zuul from 192.168.122.30 port 47120 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:21:26 compute-0 systemd-logind[786]: New session 16 of user zuul.
Jan 21 23:21:26 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 21 23:21:26 compute-0 sshd-session[69166]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:21:27 compute-0 python3.9[69319]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:21:29 compute-0 sudo[69473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geegnjtygsisoetayjzobjnswwsaoliw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037688.346735-56-255273291509437/AnsiballZ_systemd.py'
Jan 21 23:21:29 compute-0 sudo[69473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:29 compute-0 python3.9[69475]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 21 23:21:29 compute-0 sudo[69473]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:30 compute-0 sudo[69627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awheubrzxgsmvqrdxqbijgrjhkvnurvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037690.2590103-80-168557262058360/AnsiballZ_systemd.py'
Jan 21 23:21:30 compute-0 sudo[69627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:30 compute-0 python3.9[69629]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 23:21:30 compute-0 sudo[69627]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:31 compute-0 sudo[69780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pasfbmtctxmodejvkerfpjtbrmeyhhfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037691.163943-107-244739202004530/AnsiballZ_command.py'
Jan 21 23:21:31 compute-0 sudo[69780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:31 compute-0 python3.9[69782]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:21:31 compute-0 sudo[69780]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:32 compute-0 sudo[69933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgliraawliffsonvgaufjgwihkimdyqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037692.1096466-131-25230876908810/AnsiballZ_stat.py'
Jan 21 23:21:32 compute-0 sudo[69933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:32 compute-0 python3.9[69935]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:21:32 compute-0 sudo[69933]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:33 compute-0 sudo[70087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjylytpnhtsoenbfaojjbvumfdbmhtdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037693.2803533-155-102005498384978/AnsiballZ_command.py'
Jan 21 23:21:33 compute-0 sudo[70087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:33 compute-0 python3.9[70089]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:21:33 compute-0 sudo[70087]: pam_unix(sudo:session): session closed for user root
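
Activation is gated on the /etc/nftables/edpm-rules.nft.changed sentinel touched earlier (hence the stat just above): chains are loaded first on their own, then flushes, rules and update-jumps are streamed through a single nft invocation so the rule swap applies as one transaction, and the sentinel is removed in the next task below. In order:

    nft -f /etc/nftables/edpm-chains.nft               # make sure chains exist first
    set -o pipefail
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -  # flush + reload in one shot
    rm -f /etc/nftables/edpm-rules.nft.changed          # clear the change sentinel
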
Jan 21 23:21:34 compute-0 sudo[70242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlvbaigodedjavmqpeqnywvumwitjmzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037694.262262-179-107948220786752/AnsiballZ_file.py'
Jan 21 23:21:34 compute-0 sudo[70242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:35 compute-0 python3.9[70244]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:21:35 compute-0 sudo[70242]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:35 compute-0 sshd-session[69169]: Connection closed by 192.168.122.30 port 47120
Jan 21 23:21:35 compute-0 sshd-session[69166]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:21:35 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 21 23:21:35 compute-0 systemd[1]: session-16.scope: Consumed 5.188s CPU time.
Jan 21 23:21:35 compute-0 systemd-logind[786]: Session 16 logged out. Waiting for processes to exit.
Jan 21 23:21:35 compute-0 systemd-logind[786]: Removed session 16.
Jan 21 23:21:41 compute-0 sshd-session[70269]: Accepted publickey for zuul from 192.168.122.30 port 47742 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:21:41 compute-0 systemd-logind[786]: New session 17 of user zuul.
Jan 21 23:21:41 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 21 23:21:41 compute-0 sshd-session[70269]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:21:42 compute-0 python3.9[70422]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:21:43 compute-0 sudo[70576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gztveivwtnxnvfdwiccwzatnurfimdkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037703.4372032-62-152339242261504/AnsiballZ_setup.py'
Jan 21 23:21:43 compute-0 sudo[70576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:44 compute-0 python3.9[70578]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:21:44 compute-0 sudo[70576]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:44 compute-0 sudo[70660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brbuhkllgrubwjnnmzlazdcqairmzaim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769037703.4372032-62-152339242261504/AnsiballZ_dnf.py'
Jan 21 23:21:44 compute-0 sudo[70660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:45 compute-0 python3.9[70662]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 21 23:21:46 compute-0 sudo[70660]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:47 compute-0 python3.9[70813]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
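
yum-utils is installed just above to provide needs-restarting; with -r it reports whether applied updates require a reboot. To the best of my knowledge -r communicates via exit status (0 when no reboot is needed, non-zero when one is), which is what makes it usable as a playbook check:

    if needs-restarting -r; then
        echo "no reboot required"
    else
        echo "reboot required"     # exit-code contract as described above
    fi
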
Jan 21 23:21:48 compute-0 python3.9[70964]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 23:21:49 compute-0 python3.9[71114]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:21:49 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 23:21:50 compute-0 python3.9[71265]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:21:51 compute-0 sshd-session[70272]: Connection closed by 192.168.122.30 port 47742
Jan 21 23:21:51 compute-0 sshd-session[70269]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:21:51 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 21 23:21:51 compute-0 systemd[1]: session-17.scope: Consumed 6.407s CPU time.
Jan 21 23:21:51 compute-0 systemd-logind[786]: Session 17 logged out. Waiting for processes to exit.
Jan 21 23:21:51 compute-0 systemd-logind[786]: Removed session 17.
Jan 21 23:21:58 compute-0 sshd-session[71290]: Accepted publickey for zuul from 38.102.83.184 port 54696 ssh2: RSA SHA256:gO0M839svU6fVamuNUCiB4QTUcucusiR8OAS6SArSuQ
Jan 21 23:21:58 compute-0 systemd-logind[786]: New session 18 of user zuul.
Jan 21 23:21:58 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 21 23:21:58 compute-0 sshd-session[71290]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:21:59 compute-0 sudo[71366]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izwcvlrwgnavbcuowcnghbbevkhrikpd ; /usr/bin/python3'
Jan 21 23:21:59 compute-0 sudo[71366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:21:59 compute-0 useradd[71370]: new group: name=ceph-admin, GID=42478
Jan 21 23:21:59 compute-0 useradd[71370]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 21 23:21:59 compute-0 sudo[71366]: pam_unix(sudo:session): session closed for user root
Jan 21 23:21:59 compute-0 sudo[71452]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hawmbayywcgbuwqgincyiuzzfedftfza ; /usr/bin/python3'
Jan 21 23:21:59 compute-0 sudo[71452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:00 compute-0 sudo[71452]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:00 compute-0 sudo[71525]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvtgmokcnpydvtxpcyxthvyennhxukwq ; /usr/bin/python3'
Jan 21 23:22:00 compute-0 sudo[71525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:00 compute-0 sudo[71525]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:01 compute-0 sudo[71575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjgcdwfeawzjemsgwirixjmclkbmqwnm ; /usr/bin/python3'
Jan 21 23:22:01 compute-0 sudo[71575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:01 compute-0 sudo[71575]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:01 compute-0 sudo[71601]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlzqxeneplwucpgaxnntnlhedhvelerx ; /usr/bin/python3'
Jan 21 23:22:01 compute-0 sudo[71601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:01 compute-0 sudo[71601]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:01 compute-0 sudo[71627]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvtydscpppeljmwbupqfquzjtpjpooku ; /usr/bin/python3'
Jan 21 23:22:01 compute-0 sudo[71627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:01 compute-0 sudo[71627]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:02 compute-0 sudo[71653]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhzjpdulcmxwouvioynvgsybzsszdysc ; /usr/bin/python3'
Jan 21 23:22:02 compute-0 sudo[71653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:02 compute-0 sudo[71653]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:02 compute-0 sudo[71731]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivwygqpljnfoqpgqywfotlfxmbnjbsgo ; /usr/bin/python3'
Jan 21 23:22:02 compute-0 sudo[71731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:03 compute-0 sudo[71731]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:03 compute-0 sudo[71804]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vozrwyrmakuykteuzwlzvwdzjxmxjopy ; /usr/bin/python3'
Jan 21 23:22:03 compute-0 sudo[71804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:03 compute-0 sudo[71804]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:03 compute-0 sudo[71906]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izgwzgqcljwxugemwamwuwyzonummwym ; /usr/bin/python3'
Jan 21 23:22:03 compute-0 sudo[71906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:04 compute-0 sudo[71906]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:04 compute-0 sudo[71979]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxyffziusjotxnjdtuvpaqibyubwblbd ; /usr/bin/python3'
Jan 21 23:22:04 compute-0 sudo[71979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:04 compute-0 sudo[71979]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:05 compute-0 sudo[72029]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciyrdwylcfwsffwqcodrjkfeesgtgunk ; /usr/bin/python3'
Jan 21 23:22:05 compute-0 sudo[72029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:05 compute-0 python3[72031]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:22:06 compute-0 chronyd[58506]: Selected source 23.133.168.245 (pool.ntp.org)
Jan 21 23:22:06 compute-0 sudo[72029]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:07 compute-0 sudo[72124]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nobjkvkldqjybvkskpoqobuqmomeyrxd ; /usr/bin/python3'
Jan 21 23:22:07 compute-0 sudo[72124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:07 compute-0 python3[72126]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 23:22:08 compute-0 sudo[72124]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:08 compute-0 sudo[72151]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgahsxbbkdcgvpkisbrhwhbxnmlbjidb ; /usr/bin/python3'
Jan 21 23:22:08 compute-0 sudo[72151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:08 compute-0 python3[72153]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 23:22:08 compute-0 sudo[72151]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:09 compute-0 sudo[72177]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiuwwyeizumjlxlwlhrakrpbvtszagvb ; /usr/bin/python3'
Jan 21 23:22:09 compute-0 sudo[72177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:09 compute-0 python3[72179]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:22:09 compute-0 kernel: loop: module loaded
Jan 21 23:22:09 compute-0 kernel: loop3: detected capacity change from 0 to 14680064
Jan 21 23:22:09 compute-0 sudo[72177]: pam_unix(sudo:session): session closed for user root
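
The dd invocation writes zero blocks but seeks to 7G, leaving a 7 GiB sparse file to back the OSD; losetup then exposes it as /dev/loop3. The kernel's "capacity change from 0 to 14680064" is in 512-byte sectors: 14680064 * 512 B = 7,516,192,768 B = exactly 7 GiB. A slightly more direct equivalent:

    truncate -s 7G /var/lib/ceph-osd-0.img      # sparse 7 GiB backing file
    losetup /dev/loop3 /var/lib/ceph-osd-0.img  # attach it as a block device
    lsblk /dev/loop3                            # shows a 7G loop device
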
Jan 21 23:22:10 compute-0 sudo[72212]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amnjdrjjxlcajfbdpnqsilcqrlwzxcop ; /usr/bin/python3'
Jan 21 23:22:10 compute-0 sudo[72212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:10 compute-0 python3[72214]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:22:10 compute-0 lvm[72217]: PV /dev/loop3 not used.
Jan 21 23:22:10 compute-0 lvm[72226]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 23:22:10 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 21 23:22:10 compute-0 sudo[72212]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:10 compute-0 lvm[72228]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 21 23:22:10 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
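
The loop device is then stacked into LVM: a single PV, a one-PV volume group, and one logical volume consuming every free extent; the lvm[...] lines are event-based autoactivation noticing the completed VG. The logged commands, annotated:

    pvcreate /dev/loop3                           # label the loop device as a PV
    vgcreate ceph_vg0 /dev/loop3                  # single-PV volume group
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0    # one LV over all free extents
    lvs ceph_vg0                                  # confirm ceph_lv0 is active
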
Jan 21 23:22:11 compute-0 sudo[72304]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nehnxiaftjlqsmvbwydgieuvadrkhwof ; /usr/bin/python3'
Jan 21 23:22:11 compute-0 sudo[72304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:11 compute-0 python3[72306]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 23:22:11 compute-0 sudo[72304]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:11 compute-0 sudo[72377]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnpjpnrimwjhwarkeiuwqmdoovnpdhbm ; /usr/bin/python3'
Jan 21 23:22:11 compute-0 sudo[72377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:11 compute-0 python3[72379]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769037730.9233594-37002-108267053548463/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:22:11 compute-0 sudo[72377]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:12 compute-0 sudo[72427]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcjhrbzezdfqjnlflrulygcxivzgyajd ; /usr/bin/python3'
Jan 21 23:22:12 compute-0 sudo[72427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:12 compute-0 python3[72429]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:22:12 compute-0 systemd[1]: Reloading.
Jan 21 23:22:12 compute-0 systemd-sysv-generator[72461]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:22:12 compute-0 systemd-rc-local-generator[72458]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:22:12 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 21 23:22:12 compute-0 bash[72468]: /dev/loop3: [64513]:4328477 (/var/lib/ceph-osd-0.img)
Jan 21 23:22:12 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 21 23:22:13 compute-0 sudo[72427]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:13 compute-0 lvm[72470]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 23:22:13 compute-0 lvm[72470]: VG ceph_vg0 finished
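The rendered unit itself is not logged (the copy task records content=NOT_LOGGING_PARAMETER), but its observed behavior, a oneshot start that reasserts the loop mapping and prints the existing association, is consistent with a template along these lines. This is an illustrative reconstruction of ceph-osd-losetup.service.j2, not its actual contents:

    # /etc/systemd/system/ceph-osd-losetup-0.service (hypothetical sketch)
    [Unit]
    Description=Ceph OSD losetup
    After=local-fs.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # Re-attach the backing image, or report the existing mapping if already attached.
    ExecStart=/bin/bash -c '/usr/sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img || /usr/sbin/losetup -j /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target

The "/dev/loop3: [64513]:4328477 (/var/lib/ceph-osd-0.img)" output above is losetup's device:inode listing for the existing mapping, which fits the fallback branch.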
Jan 21 23:22:16 compute-0 python3[72495]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:22:18 compute-0 sudo[72586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulwqpfiqxsrziamseilqykbdcfjwpxul ; /usr/bin/python3'
Jan 21 23:22:18 compute-0 sudo[72586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:19 compute-0 python3[72588]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 23:22:20 compute-0 groupadd[72594]: group added to /etc/group: name=cephadm, GID=993
Jan 21 23:22:20 compute-0 groupadd[72594]: group added to /etc/gshadow: name=cephadm
Jan 21 23:22:20 compute-0 groupadd[72594]: new group: name=cephadm, GID=993
Jan 21 23:22:20 compute-0 useradd[72601]: new user: name=cephadm, UID=992, GID=993, home=/var/lib/cephadm, shell=/bin/bash, from=none
Jan 21 23:22:20 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 23:22:20 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 21 23:22:20 compute-0 sudo[72586]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:21 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 23:22:21 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 23:22:21 compute-0 systemd[1]: run-r9302e259240745e4b17cbdc8c823a235.service: Deactivated successfully.
Jan 21 23:22:21 compute-0 sudo[72697]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npjbpwgsfbdbefnnrklchgteuznwfkav ; /usr/bin/python3'
Jan 21 23:22:21 compute-0 sudo[72697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:21 compute-0 python3[72699]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 23:22:21 compute-0 sudo[72697]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:21 compute-0 sudo[72725]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qacpvcatoafktefcuamuywbauwlbkeln ; /usr/bin/python3'
Jan 21 23:22:21 compute-0 sudo[72725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:21 compute-0 python3[72727]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:22:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 23:22:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 23:22:22 compute-0 sudo[72725]: pam_unix(sudo:session): session closed for user root
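"cephadm ls" enumerates daemons already deployed on the host; --no-detail trims the per-daemon fields. On a freshly provisioned node it prints an empty JSON list, which is what lets the play proceed to bootstrap. Manual check:

    /usr/sbin/cephadm ls --no-detail    # JSON array of deployed daemons; [] on a clean host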
Jan 21 23:22:22 compute-0 sudo[72790]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqmwpetjqnxkrhgasbstgcxrjwbbmxry ; /usr/bin/python3'
Jan 21 23:22:22 compute-0 sudo[72790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:22 compute-0 python3[72792]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:22:22 compute-0 sudo[72790]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:22 compute-0 sudo[72816]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upzrujmlimqdszvtqxxfdreuxaodexxj ; /usr/bin/python3'
Jan 21 23:22:22 compute-0 sudo[72816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 23:22:22 compute-0 python3[72818]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:22:22 compute-0 sudo[72816]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:23 compute-0 sudo[72894]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpetoygecibckaddviihqdoogfsktube ; /usr/bin/python3'
Jan 21 23:22:23 compute-0 sudo[72894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:23 compute-0 python3[72896]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 23:22:23 compute-0 sudo[72894]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:23 compute-0 sudo[72967]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foiipjbkladxmukrxkgackzjnuzeetpf ; /usr/bin/python3'
Jan 21 23:22:23 compute-0 sudo[72967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:24 compute-0 python3[72969]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769037743.3854318-37193-133985120407960/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:22:24 compute-0 sudo[72967]: pam_unix(sudo:session): session closed for user root
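The spec contents are not logged either, but a cephadm service spec matching this single-node layout (host compute-0 at 192.168.122.100, OSD on the ceph_vg0/ceph_lv0 device prepared above) would typically look like the sketch below; the service_id and overall shape are illustrative assumptions:

    # /home/ceph-admin/specs/ceph_spec.yaml (hypothetical sketch)
    service_type: host
    hostname: compute-0
    addr: 192.168.122.100
    ---
    service_type: mon
    placement:
      hosts:
        - compute-0
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0
    spec:
      data_devices:
        paths:
          - /dev/ceph_vg0/ceph_lv0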
Jan 21 23:22:24 compute-0 sudo[73069]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufpgieghomihndsaimsilpycnwefblga ; /usr/bin/python3'
Jan 21 23:22:24 compute-0 sudo[73069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:25 compute-0 python3[73071]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 23:22:25 compute-0 sudo[73069]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:25 compute-0 sudo[73142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdcttbraxobajcjoijieehwwetowvrkr ; /usr/bin/python3'
Jan 21 23:22:25 compute-0 sudo[73142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:25 compute-0 python3[73144]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769037744.6567302-37211-228104402388456/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:22:25 compute-0 sudo[73142]: pam_unix(sudo:session): session closed for user root
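assimilate_ceph.conf is staged here and handed to bootstrap via --config below, where cephadm assimilates its options into the cluster's central configuration. Its contents are not logged, so the following shows only the kind of minimal tuning a single-node CI deployment commonly carries, not the actual file:

    # /home/ceph-admin/assimilate_ceph.conf (hypothetical sketch)
    [global]
    osd_pool_default_size = 1
    osd_pool_default_min_size = 1
    mon_warn_on_insecure_global_id_reclaim_allowed = false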
Jan 21 23:22:25 compute-0 sudo[73192]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fykqpwbsarnrnbthvqraabsxazratcte ; /usr/bin/python3'
Jan 21 23:22:25 compute-0 sudo[73192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:25 compute-0 python3[73194]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 23:22:25 compute-0 sudo[73192]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:26 compute-0 sudo[73220]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdufmbubhqhpzsmslhvqnwjorbjghqez ; /usr/bin/python3'
Jan 21 23:22:26 compute-0 sudo[73220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:26 compute-0 python3[73222]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 23:22:26 compute-0 sudo[73220]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:26 compute-0 sudo[73248]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lebevnqxsbynsyjmbuyrvstrlbvkwefh ; /usr/bin/python3'
Jan 21 23:22:26 compute-0 sudo[73248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:26 compute-0 python3[73250]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 23:22:26 compute-0 sudo[73248]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:27 compute-0 python3[73276]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 23:22:27 compute-0 sudo[73300]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzmzsuqxthgddqiaiikgddsqzmsayltd ; /usr/bin/python3'
Jan 21 23:22:27 compute-0 sudo[73300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:22:27 compute-0 python3[73302]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config /home/ceph-admin/assimilate_ceph.conf --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
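For readability, the same bootstrap invocation reflowed with one flag per line (flags unchanged from the logged command):

    /usr/sbin/cephadm bootstrap \
        --skip-firewalld --skip-prepare-host \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin \
        --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --skip-monitoring-stack --skip-dashboard \
        --mon-ip 192.168.122.100

The SSH session from 192.168.122.100 that immediately follows, ending in a sudo run of /bin/echo, is evidently bootstrap verifying that the ceph-admin key pair and passwordless sudo work before it deploys any daemons.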
Jan 21 23:22:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 23:22:27 compute-0 sshd-session[73318]: Accepted publickey for ceph-admin from 192.168.122.100 port 52168 ssh2: RSA SHA256:kW7AbEF6E9Zse/yjN6dVjvmzoqBwUgKYFkxqB1vmEmU
Jan 21 23:22:27 compute-0 systemd-logind[786]: New session 19 of user ceph-admin.
Jan 21 23:22:27 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 21 23:22:27 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 21 23:22:27 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 21 23:22:27 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 21 23:22:27 compute-0 systemd[73322]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 23:22:27 compute-0 systemd[73322]: Queued start job for default target Main User Target.
Jan 21 23:22:27 compute-0 systemd[73322]: Created slice User Application Slice.
Jan 21 23:22:27 compute-0 systemd[73322]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 21 23:22:27 compute-0 systemd[73322]: Started Daily Cleanup of User's Temporary Directories.
Jan 21 23:22:27 compute-0 systemd[73322]: Reached target Paths.
Jan 21 23:22:27 compute-0 systemd[73322]: Reached target Timers.
Jan 21 23:22:27 compute-0 systemd[73322]: Starting D-Bus User Message Bus Socket...
Jan 21 23:22:27 compute-0 systemd[73322]: Starting Create User's Volatile Files and Directories...
Jan 21 23:22:27 compute-0 systemd[73322]: Listening on D-Bus User Message Bus Socket.
Jan 21 23:22:27 compute-0 systemd[73322]: Reached target Sockets.
Jan 21 23:22:27 compute-0 systemd[73322]: Finished Create User's Volatile Files and Directories.
Jan 21 23:22:27 compute-0 systemd[73322]: Reached target Basic System.
Jan 21 23:22:27 compute-0 systemd[73322]: Reached target Main User Target.
Jan 21 23:22:27 compute-0 systemd[73322]: Startup finished in 131ms.
Jan 21 23:22:27 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 21 23:22:27 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Jan 21 23:22:27 compute-0 sshd-session[73318]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 23:22:28 compute-0 sudo[73339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Jan 21 23:22:28 compute-0 sudo[73339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:22:28 compute-0 sudo[73339]: pam_unix(sudo:session): session closed for user root
Jan 21 23:22:28 compute-0 sshd-session[73338]: Received disconnect from 192.168.122.100 port 52168:11: disconnected by user
Jan 21 23:22:28 compute-0 sshd-session[73338]: Disconnected from user ceph-admin 192.168.122.100 port 52168
Jan 21 23:22:28 compute-0 sshd-session[73318]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 21 23:22:28 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 21 23:22:28 compute-0 systemd-logind[786]: Session 19 logged out. Waiting for processes to exit.
Jan 21 23:22:28 compute-0 systemd-logind[786]: Removed session 19.
Jan 21 23:22:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2293674865-merged.mount: Deactivated successfully.
Jan 21 23:22:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2293674865-lower\x2dmapped.mount: Deactivated successfully.
Jan 21 23:22:38 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 21 23:22:39 compute-0 systemd[73322]: Activating special unit Exit the Session...
Jan 21 23:22:39 compute-0 systemd[73322]: Stopped target Main User Target.
Jan 21 23:22:39 compute-0 systemd[73322]: Stopped target Basic System.
Jan 21 23:22:39 compute-0 systemd[73322]: Stopped target Paths.
Jan 21 23:22:39 compute-0 systemd[73322]: Stopped target Sockets.
Jan 21 23:22:39 compute-0 systemd[73322]: Stopped target Timers.
Jan 21 23:22:39 compute-0 systemd[73322]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 21 23:22:39 compute-0 systemd[73322]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 21 23:22:39 compute-0 systemd[73322]: Closed D-Bus User Message Bus Socket.
Jan 21 23:22:39 compute-0 systemd[73322]: Stopped Create User's Volatile Files and Directories.
Jan 21 23:22:39 compute-0 systemd[73322]: Removed slice User Application Slice.
Jan 21 23:22:39 compute-0 systemd[73322]: Reached target Shutdown.
Jan 21 23:22:39 compute-0 systemd[73322]: Finished Exit the Session.
Jan 21 23:22:39 compute-0 systemd[73322]: Reached target Exit the Session.
Jan 21 23:22:39 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 21 23:22:39 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 21 23:22:39 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 21 23:22:39 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 21 23:22:39 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 21 23:22:39 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 21 23:22:39 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 21 23:22:45 compute-0 podman[73376]: 2026-01-21 23:22:45.090839231 +0000 UTC m=+16.979056619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:22:45 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 23:22:45 compute-0 podman[73437]: 2026-01-21 23:22:45.15796289 +0000 UTC m=+0.040513403 container create 6ca83950b6b6b02f7bb6d03e4d7917ed2fdf90f51707f7bf1db3750c5cb9c712 (image=quay.io/ceph/ceph:v18, name=lucid_hellman, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 23:22:45 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 21 23:22:45 compute-0 systemd[1]: Started libpod-conmon-6ca83950b6b6b02f7bb6d03e4d7917ed2fdf90f51707f7bf1db3750c5cb9c712.scope.
Jan 21 23:22:45 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:22:45 compute-0 podman[73437]: 2026-01-21 23:22:45.143356983 +0000 UTC m=+0.025907516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:22:45 compute-0 podman[73437]: 2026-01-21 23:22:45.276714664 +0000 UTC m=+0.159265267 container init 6ca83950b6b6b02f7bb6d03e4d7917ed2fdf90f51707f7bf1db3750c5cb9c712 (image=quay.io/ceph/ceph:v18, name=lucid_hellman, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:22:45 compute-0 podman[73437]: 2026-01-21 23:22:45.288177445 +0000 UTC m=+0.170727998 container start 6ca83950b6b6b02f7bb6d03e4d7917ed2fdf90f51707f7bf1db3750c5cb9c712 (image=quay.io/ceph/ceph:v18, name=lucid_hellman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 21 23:22:45 compute-0 podman[73437]: 2026-01-21 23:22:45.291884139 +0000 UTC m=+0.174434742 container attach 6ca83950b6b6b02f7bb6d03e4d7917ed2fdf90f51707f7bf1db3750c5cb9c712 (image=quay.io/ceph/ceph:v18, name=lucid_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:22:45 compute-0 lucid_hellman[73452]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Jan 21 23:22:45 compute-0 systemd[1]: libpod-6ca83950b6b6b02f7bb6d03e4d7917ed2fdf90f51707f7bf1db3750c5cb9c712.scope: Deactivated successfully.
Jan 21 23:22:45 compute-0 podman[73437]: 2026-01-21 23:22:45.611103572 +0000 UTC m=+0.493654105 container died 6ca83950b6b6b02f7bb6d03e4d7917ed2fdf90f51707f7bf1db3750c5cb9c712 (image=quay.io/ceph/ceph:v18, name=lucid_hellman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 23:22:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f95b472ca5d807fbd78dfccb5fc2a1c64061503c3cfddfa629e7623e3492bd3b-merged.mount: Deactivated successfully.
Jan 21 23:22:45 compute-0 podman[73437]: 2026-01-21 23:22:45.651935104 +0000 UTC m=+0.534485617 container remove 6ca83950b6b6b02f7bb6d03e4d7917ed2fdf90f51707f7bf1db3750c5cb9c712 (image=quay.io/ceph/ceph:v18, name=lucid_hellman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 23:22:45 compute-0 systemd[1]: libpod-conmon-6ca83950b6b6b02f7bb6d03e4d7917ed2fdf90f51707f7bf1db3750c5cb9c712.scope: Deactivated successfully.
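This first short-lived container (lucid_hellman) is the image sanity check: bootstrap runs the ceph binary inside the freshly pulled image and records the version banner. Roughly equivalent to:

    podman run --rm quay.io/ceph/ceph:v18 ceph --version
    # ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)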
Jan 21 23:22:45 compute-0 podman[73470]: 2026-01-21 23:22:45.719875058 +0000 UTC m=+0.048892451 container create 3a7e106ccc1cdcd624bd317e60e02091c52b6557ed39e4a32da65e5da54e9f41 (image=quay.io/ceph/ceph:v18, name=goofy_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 23:22:45 compute-0 systemd[1]: Started libpod-conmon-3a7e106ccc1cdcd624bd317e60e02091c52b6557ed39e4a32da65e5da54e9f41.scope.
Jan 21 23:22:45 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:22:45 compute-0 podman[73470]: 2026-01-21 23:22:45.786912985 +0000 UTC m=+0.115930418 container init 3a7e106ccc1cdcd624bd317e60e02091c52b6557ed39e4a32da65e5da54e9f41 (image=quay.io/ceph/ceph:v18, name=goofy_jackson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 21 23:22:45 compute-0 podman[73470]: 2026-01-21 23:22:45.692487849 +0000 UTC m=+0.021505282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:22:45 compute-0 podman[73470]: 2026-01-21 23:22:45.79752721 +0000 UTC m=+0.126544603 container start 3a7e106ccc1cdcd624bd317e60e02091c52b6557ed39e4a32da65e5da54e9f41 (image=quay.io/ceph/ceph:v18, name=goofy_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 21 23:22:45 compute-0 goofy_jackson[73487]: 167 167
Jan 21 23:22:45 compute-0 podman[73470]: 2026-01-21 23:22:45.802149172 +0000 UTC m=+0.131166605 container attach 3a7e106ccc1cdcd624bd317e60e02091c52b6557ed39e4a32da65e5da54e9f41 (image=quay.io/ceph/ceph:v18, name=goofy_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 21 23:22:45 compute-0 systemd[1]: libpod-3a7e106ccc1cdcd624bd317e60e02091c52b6557ed39e4a32da65e5da54e9f41.scope: Deactivated successfully.
Jan 21 23:22:45 compute-0 podman[73492]: 2026-01-21 23:22:45.839650712 +0000 UTC m=+0.026797862 container died 3a7e106ccc1cdcd624bd317e60e02091c52b6557ed39e4a32da65e5da54e9f41 (image=quay.io/ceph/ceph:v18, name=goofy_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 21 23:22:45 compute-0 podman[73492]: 2026-01-21 23:22:45.873656586 +0000 UTC m=+0.060803666 container remove 3a7e106ccc1cdcd624bd317e60e02091c52b6557ed39e4a32da65e5da54e9f41 (image=quay.io/ceph/ceph:v18, name=goofy_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:22:45 compute-0 systemd[1]: libpod-conmon-3a7e106ccc1cdcd624bd317e60e02091c52b6557ed39e4a32da65e5da54e9f41.scope: Deactivated successfully.
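The second probe (goofy_jackson) prints "167 167", the uid and gid of the ceph user inside the image, so files written on the host side can be chowned to match. The exact path probed is an assumption, but the shape of the check is:

    podman run --rm quay.io/ceph/ceph:v18 stat -c '%u %g' /var/lib/ceph
    # 167 167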
Jan 21 23:22:45 compute-0 podman[73507]: 2026-01-21 23:22:45.950706179 +0000 UTC m=+0.046555289 container create 777f317daa05788fbc3483de2c5a1ce91c37073aacc58dee68a198488ec70a65 (image=quay.io/ceph/ceph:v18, name=epic_bhabha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:22:45 compute-0 systemd[1]: Started libpod-conmon-777f317daa05788fbc3483de2c5a1ce91c37073aacc58dee68a198488ec70a65.scope.
Jan 21 23:22:46 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:22:46 compute-0 podman[73507]: 2026-01-21 23:22:45.929791128 +0000 UTC m=+0.025640228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:22:46 compute-0 podman[73507]: 2026-01-21 23:22:46.035084547 +0000 UTC m=+0.130933657 container init 777f317daa05788fbc3483de2c5a1ce91c37073aacc58dee68a198488ec70a65 (image=quay.io/ceph/ceph:v18, name=epic_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:22:46 compute-0 podman[73507]: 2026-01-21 23:22:46.041079721 +0000 UTC m=+0.136928841 container start 777f317daa05788fbc3483de2c5a1ce91c37073aacc58dee68a198488ec70a65 (image=quay.io/ceph/ceph:v18, name=epic_bhabha, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 21 23:22:46 compute-0 podman[73507]: 2026-01-21 23:22:46.045064314 +0000 UTC m=+0.140913414 container attach 777f317daa05788fbc3483de2c5a1ce91c37073aacc58dee68a198488ec70a65 (image=quay.io/ceph/ceph:v18, name=epic_bhabha, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 21 23:22:46 compute-0 epic_bhabha[73523]: AQDGX3FpGOWyBBAAtHtwRIre6tz8QKC5+c+2vA==
Jan 21 23:22:46 compute-0 systemd[1]: libpod-777f317daa05788fbc3483de2c5a1ce91c37073aacc58dee68a198488ec70a65.scope: Deactivated successfully.
Jan 21 23:22:46 compute-0 podman[73507]: 2026-01-21 23:22:46.084451152 +0000 UTC m=+0.180300232 container died 777f317daa05788fbc3483de2c5a1ce91c37073aacc58dee68a198488ec70a65 (image=quay.io/ceph/ceph:v18, name=epic_bhabha, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Jan 21 23:22:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-393925f1d60e76efa4883f4d00dd3cf552f6ae254dc6139cc8ce903b9d3bf2f0-merged.mount: Deactivated successfully.
Jan 21 23:22:46 compute-0 podman[73507]: 2026-01-21 23:22:46.122102147 +0000 UTC m=+0.217951227 container remove 777f317daa05788fbc3483de2c5a1ce91c37073aacc58dee68a198488ec70a65 (image=quay.io/ceph/ceph:v18, name=epic_bhabha, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:22:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 23:22:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 23:22:46 compute-0 systemd[1]: libpod-conmon-777f317daa05788fbc3483de2c5a1ce91c37073aacc58dee68a198488ec70a65.scope: Deactivated successfully.
Jan 21 23:22:46 compute-0 podman[73543]: 2026-01-21 23:22:46.202318377 +0000 UTC m=+0.055953807 container create 3b2de37c522ced3a76ede9bd5294ac22f5405757ae2c70ce398d69ff9459bbf2 (image=quay.io/ceph/ceph:v18, name=hardcore_archimedes, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 21 23:22:46 compute-0 systemd[1]: Started libpod-conmon-3b2de37c522ced3a76ede9bd5294ac22f5405757ae2c70ce398d69ff9459bbf2.scope.
Jan 21 23:22:46 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:22:46 compute-0 podman[73543]: 2026-01-21 23:22:46.185905124 +0000 UTC m=+0.039540584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:22:46 compute-0 podman[73543]: 2026-01-21 23:22:46.286400776 +0000 UTC m=+0.140036246 container init 3b2de37c522ced3a76ede9bd5294ac22f5405757ae2c70ce398d69ff9459bbf2 (image=quay.io/ceph/ceph:v18, name=hardcore_archimedes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 23:22:46 compute-0 podman[73543]: 2026-01-21 23:22:46.290985338 +0000 UTC m=+0.144620788 container start 3b2de37c522ced3a76ede9bd5294ac22f5405757ae2c70ce398d69ff9459bbf2 (image=quay.io/ceph/ceph:v18, name=hardcore_archimedes, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:22:46 compute-0 podman[73543]: 2026-01-21 23:22:46.293829455 +0000 UTC m=+0.147464895 container attach 3b2de37c522ced3a76ede9bd5294ac22f5405757ae2c70ce398d69ff9459bbf2 (image=quay.io/ceph/ceph:v18, name=hardcore_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:22:46 compute-0 hardcore_archimedes[73560]: AQDGX3FpU9ZdExAAKcRI3gNuehRbs5xLYBPXOQ==
Jan 21 23:22:46 compute-0 systemd[1]: libpod-3b2de37c522ced3a76ede9bd5294ac22f5405757ae2c70ce398d69ff9459bbf2.scope: Deactivated successfully.
Jan 21 23:22:46 compute-0 podman[73543]: 2026-01-21 23:22:46.329800568 +0000 UTC m=+0.183436018 container died 3b2de37c522ced3a76ede9bd5294ac22f5405757ae2c70ce398d69ff9459bbf2 (image=quay.io/ceph/ceph:v18, name=hardcore_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Jan 21 23:22:46 compute-0 podman[73543]: 2026-01-21 23:22:46.374142538 +0000 UTC m=+0.227778008 container remove 3b2de37c522ced3a76ede9bd5294ac22f5405757ae2c70ce398d69ff9459bbf2 (image=quay.io/ceph/ceph:v18, name=hardcore_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 21 23:22:46 compute-0 systemd[1]: libpod-conmon-3b2de37c522ced3a76ede9bd5294ac22f5405757ae2c70ce398d69ff9459bbf2.scope: Deactivated successfully.
Jan 21 23:22:46 compute-0 podman[73578]: 2026-01-21 23:22:46.516701351 +0000 UTC m=+0.114163342 container create d7e035761bd64d08a9bfeac5846441b6c51ab53ecc6ea886bd292255f19a55f3 (image=quay.io/ceph/ceph:v18, name=great_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 21 23:22:46 compute-0 podman[73578]: 2026-01-21 23:22:46.433617533 +0000 UTC m=+0.031079564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:22:46 compute-0 systemd[1]: Started libpod-conmon-d7e035761bd64d08a9bfeac5846441b6c51ab53ecc6ea886bd292255f19a55f3.scope.
Jan 21 23:22:46 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:22:46 compute-0 podman[73578]: 2026-01-21 23:22:46.611345615 +0000 UTC m=+0.208807686 container init d7e035761bd64d08a9bfeac5846441b6c51ab53ecc6ea886bd292255f19a55f3 (image=quay.io/ceph/ceph:v18, name=great_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 21 23:22:46 compute-0 podman[73578]: 2026-01-21 23:22:46.621433445 +0000 UTC m=+0.218895466 container start d7e035761bd64d08a9bfeac5846441b6c51ab53ecc6ea886bd292255f19a55f3 (image=quay.io/ceph/ceph:v18, name=great_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 21 23:22:46 compute-0 podman[73578]: 2026-01-21 23:22:46.625742896 +0000 UTC m=+0.223204957 container attach d7e035761bd64d08a9bfeac5846441b6c51ab53ecc6ea886bd292255f19a55f3 (image=quay.io/ceph/ceph:v18, name=great_franklin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:22:46 compute-0 great_franklin[73596]: AQDGX3FpgQeqJxAAS0sEjSFw2hKIPN+tCFPwdw==
Jan 21 23:22:46 compute-0 systemd[1]: libpod-d7e035761bd64d08a9bfeac5846441b6c51ab53ecc6ea886bd292255f19a55f3.scope: Deactivated successfully.
Jan 21 23:22:46 compute-0 podman[73578]: 2026-01-21 23:22:46.671012885 +0000 UTC m=+0.268474896 container died d7e035761bd64d08a9bfeac5846441b6c51ab53ecc6ea886bd292255f19a55f3 (image=quay.io/ceph/ceph:v18, name=great_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 23:22:46 compute-0 podman[73578]: 2026-01-21 23:22:46.715008235 +0000 UTC m=+0.312470206 container remove d7e035761bd64d08a9bfeac5846441b6c51ab53ecc6ea886bd292255f19a55f3 (image=quay.io/ceph/ceph:v18, name=great_franklin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:22:46 compute-0 systemd[1]: libpod-conmon-d7e035761bd64d08a9bfeac5846441b6c51ab53ecc6ea886bd292255f19a55f3.scope: Deactivated successfully.
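The three one-shot containers that each emit a base64 blob (epic_bhabha, hardcore_archimedes, great_franklin) are bootstrap minting the initial CephX secrets (mon. key, client.admin key, and so on). Each run is presumably equivalent to:

    podman run --rm quay.io/ceph/ceph:v18 ceph-authtool --gen-print-key
    # AQ...==  (a fresh CephX secret on every invocation)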
Jan 21 23:22:46 compute-0 podman[73616]: 2026-01-21 23:22:46.805763789 +0000 UTC m=+0.057053062 container create 85263a373948ed00b4fe1a9687c2ac47b6e8d2ff8517c5066829a49296122871 (image=quay.io/ceph/ceph:v18, name=pedantic_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 23:22:46 compute-0 systemd[1]: Started libpod-conmon-85263a373948ed00b4fe1a9687c2ac47b6e8d2ff8517c5066829a49296122871.scope.
Jan 21 23:22:46 compute-0 podman[73616]: 2026-01-21 23:22:46.784538937 +0000 UTC m=+0.035828210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:22:46 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f970b5b6b5a2ea5554b4303500bde3057162512e9299de41326ef005d05900c0/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:46 compute-0 podman[73616]: 2026-01-21 23:22:46.908462309 +0000 UTC m=+0.159751642 container init 85263a373948ed00b4fe1a9687c2ac47b6e8d2ff8517c5066829a49296122871 (image=quay.io/ceph/ceph:v18, name=pedantic_newton, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 21 23:22:46 compute-0 podman[73616]: 2026-01-21 23:22:46.917366382 +0000 UTC m=+0.168655665 container start 85263a373948ed00b4fe1a9687c2ac47b6e8d2ff8517c5066829a49296122871 (image=quay.io/ceph/ceph:v18, name=pedantic_newton, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 23:22:46 compute-0 podman[73616]: 2026-01-21 23:22:46.922794829 +0000 UTC m=+0.174084162 container attach 85263a373948ed00b4fe1a9687c2ac47b6e8d2ff8517c5066829a49296122871 (image=quay.io/ceph/ceph:v18, name=pedantic_newton, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 23:22:46 compute-0 pedantic_newton[73632]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 21 23:22:46 compute-0 pedantic_newton[73632]: setting min_mon_release = pacific
Jan 21 23:22:46 compute-0 pedantic_newton[73632]: /usr/bin/monmaptool: set fsid to 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:22:46 compute-0 pedantic_newton[73632]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 21 23:22:46 compute-0 systemd[1]: libpod-85263a373948ed00b4fe1a9687c2ac47b6e8d2ff8517c5066829a49296122871.scope: Deactivated successfully.
Jan 21 23:22:47 compute-0 podman[73639]: 2026-01-21 23:22:47.011620783 +0000 UTC m=+0.029652370 container died 85263a373948ed00b4fe1a9687c2ac47b6e8d2ff8517c5066829a49296122871 (image=quay.io/ceph/ceph:v18, name=pedantic_newton, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:22:47 compute-0 podman[73639]: 2026-01-21 23:22:47.053408585 +0000 UTC m=+0.071440112 container remove 85263a373948ed00b4fe1a9687c2ac47b6e8d2ff8517c5066829a49296122871 (image=quay.io/ceph/ceph:v18, name=pedantic_newton, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:22:47 compute-0 systemd[1]: libpod-conmon-85263a373948ed00b4fe1a9687c2ac47b6e8d2ff8517c5066829a49296122871.scope: Deactivated successfully.
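pedantic_newton writes the seed monitor map that the first mon will be formatted with. From the logged monmaptool output (fsid set, min_mon_release pacific, epoch 0, one monitor), the in-container command is approximately the following; the exact flag spelling and the address vector are assumptions:

    monmaptool --create \
        --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 \
        --set-min-mon-release pacific \
        --addv compute-0 '[v2:192.168.122.100:3300,v1:192.168.122.100:6789]' \
        /tmp/monmap
    # /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)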
Jan 21 23:22:47 compute-0 podman[73654]: 2026-01-21 23:22:47.161662547 +0000 UTC m=+0.067509232 container create ed8e97f170a51d7229f87fad836d4fdc074727a8bbfa2950d63100cba4baf48f (image=quay.io/ceph/ceph:v18, name=eager_burnell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 21 23:22:47 compute-0 systemd[1]: Started libpod-conmon-ed8e97f170a51d7229f87fad836d4fdc074727a8bbfa2950d63100cba4baf48f.scope.
Jan 21 23:22:47 compute-0 podman[73654]: 2026-01-21 23:22:47.132346367 +0000 UTC m=+0.038193092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:22:47 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83719a71067a9b898d27992890c868dd035af0f376cdba3c5a5384b734ff0259/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83719a71067a9b898d27992890c868dd035af0f376cdba3c5a5384b734ff0259/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83719a71067a9b898d27992890c868dd035af0f376cdba3c5a5384b734ff0259/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83719a71067a9b898d27992890c868dd035af0f376cdba3c5a5384b734ff0259/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:47 compute-0 podman[73654]: 2026-01-21 23:22:47.26936728 +0000 UTC m=+0.175214005 container init ed8e97f170a51d7229f87fad836d4fdc074727a8bbfa2950d63100cba4baf48f (image=quay.io/ceph/ceph:v18, name=eager_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:22:47 compute-0 podman[73654]: 2026-01-21 23:22:47.282525924 +0000 UTC m=+0.188372609 container start ed8e97f170a51d7229f87fad836d4fdc074727a8bbfa2950d63100cba4baf48f (image=quay.io/ceph/ceph:v18, name=eager_burnell, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:22:47 compute-0 podman[73654]: 2026-01-21 23:22:47.286127374 +0000 UTC m=+0.191974109 container attach ed8e97f170a51d7229f87fad836d4fdc074727a8bbfa2950d63100cba4baf48f (image=quay.io/ceph/ceph:v18, name=eager_burnell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 23:22:47 compute-0 systemd[1]: libpod-ed8e97f170a51d7229f87fad836d4fdc074727a8bbfa2950d63100cba4baf48f.scope: Deactivated successfully.
Jan 21 23:22:47 compute-0 podman[73654]: 2026-01-21 23:22:47.38343405 +0000 UTC m=+0.289280755 container died ed8e97f170a51d7229f87fad836d4fdc074727a8bbfa2950d63100cba4baf48f (image=quay.io/ceph/ceph:v18, name=eager_burnell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:22:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-83719a71067a9b898d27992890c868dd035af0f376cdba3c5a5384b734ff0259-merged.mount: Deactivated successfully.
Jan 21 23:22:47 compute-0 podman[73654]: 2026-01-21 23:22:47.434285399 +0000 UTC m=+0.340132054 container remove ed8e97f170a51d7229f87fad836d4fdc074727a8bbfa2950d63100cba4baf48f (image=quay.io/ceph/ceph:v18, name=eager_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 21 23:22:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 23:22:47 compute-0 systemd[1]: libpod-conmon-ed8e97f170a51d7229f87fad836d4fdc074727a8bbfa2950d63100cba4baf48f.scope: Deactivated successfully.
Jan 21 23:22:47 compute-0 systemd[1]: Reloading.
Jan 21 23:22:47 compute-0 systemd-rc-local-generator[73730]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:22:47 compute-0 systemd-sysv-generator[73733]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:22:47 compute-0 systemd[1]: Reloading.
Jan 21 23:22:47 compute-0 systemd-sysv-generator[73776]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:22:47 compute-0 systemd-rc-local-generator[73772]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:22:47 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Jan 21 23:22:47 compute-0 systemd[1]: Reloading.
Jan 21 23:22:48 compute-0 systemd-rc-local-generator[73808]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:22:48 compute-0 systemd-sysv-generator[73813]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:22:48 compute-0 systemd[1]: Reached target Ceph cluster 3759241a-7f1c-520d-ba17-879943ee2f00.
Jan 21 23:22:48 compute-0 systemd[1]: Reloading.
Jan 21 23:22:48 compute-0 systemd-sysv-generator[73854]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:22:48 compute-0 systemd-rc-local-generator[73851]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:22:48 compute-0 systemd[1]: Reloading.
Jan 21 23:22:48 compute-0 systemd-rc-local-generator[73887]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:22:48 compute-0 systemd-sysv-generator[73890]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:22:48 compute-0 systemd[1]: Created slice Slice /system/ceph-3759241a-7f1c-520d-ba17-879943ee2f00.
Jan 21 23:22:48 compute-0 systemd[1]: Reached target System Time Set.
Jan 21 23:22:48 compute-0 systemd[1]: Reached target System Time Synchronized.
Jan 21 23:22:48 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 3759241a-7f1c-520d-ba17-879943ee2f00...
Jan 21 23:22:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 23:22:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 23:22:49 compute-0 podman[73943]: 2026-01-21 23:22:49.060242577 +0000 UTC m=+0.047374484 container create fb71e23b8cf79eded8c06f2bd8ba7b1b542dec43e0033e5cd2671913a0182c07 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f567af4313728c14f3e90cc1f9a38527cd7a1e461c7fc4bcc171148a3a78652b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f567af4313728c14f3e90cc1f9a38527cd7a1e461c7fc4bcc171148a3a78652b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f567af4313728c14f3e90cc1f9a38527cd7a1e461c7fc4bcc171148a3a78652b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f567af4313728c14f3e90cc1f9a38527cd7a1e461c7fc4bcc171148a3a78652b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:49 compute-0 podman[73943]: 2026-01-21 23:22:49.038885552 +0000 UTC m=+0.026017449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:22:49 compute-0 podman[73943]: 2026-01-21 23:22:49.134959349 +0000 UTC m=+0.122091256 container init fb71e23b8cf79eded8c06f2bd8ba7b1b542dec43e0033e5cd2671913a0182c07 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Jan 21 23:22:49 compute-0 podman[73943]: 2026-01-21 23:22:49.141524981 +0000 UTC m=+0.128656888 container start fb71e23b8cf79eded8c06f2bd8ba7b1b542dec43e0033e5cd2671913a0182c07 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:22:49 compute-0 bash[73943]: fb71e23b8cf79eded8c06f2bd8ba7b1b542dec43e0033e5cd2671913a0182c07
Jan 21 23:22:49 compute-0 systemd[1]: Started Ceph mon.compute-0 for 3759241a-7f1c-520d-ba17-879943ee2f00.
Jan 21 23:22:49 compute-0 ceph-mon[73963]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 23:22:49 compute-0 ceph-mon[73963]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 21 23:22:49 compute-0 ceph-mon[73963]: pidfile_write: ignore empty --pid-file
Jan 21 23:22:49 compute-0 ceph-mon[73963]: load: jerasure load: lrc 
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: RocksDB version: 7.9.2
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Git sha 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: DB SUMMARY
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: DB Session ID:  YQJBM9LLVBGKTK0RJ124
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: CURRENT file:  CURRENT
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                         Options.error_if_exists: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                       Options.create_if_missing: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                                     Options.env: 0x561f57a83c40
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                                Options.info_log: 0x561f59bcaec0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                              Options.statistics: (nil)
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                               Options.use_fsync: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                              Options.db_log_dir: 
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                                 Options.wal_dir: 
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                    Options.write_buffer_manager: 0x561f59bdab40
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                  Options.unordered_write: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                               Options.row_cache: None
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                              Options.wal_filter: None
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.two_write_queues: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.wal_compression: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.atomic_flush: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.max_background_jobs: 2
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.max_background_compactions: -1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.max_subcompactions: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.max_total_wal_size: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                          Options.max_open_files: -1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:       Options.compaction_readahead_size: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Compression algorithms supported:
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         kZSTD supported: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         kXpressCompression supported: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         kBZip2Compression supported: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         kLZ4Compression supported: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         kZlibCompression supported: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         kLZ4HCCompression supported: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         kSnappyCompression supported: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:           Options.merge_operator: 
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:        Options.compaction_filter: None
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f59bcaaa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561f59bc31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:        Options.write_buffer_size: 33554432
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:  Options.max_write_buffer_number: 2
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:          Options.compression: NoCompression
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.num_levels: 7
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 756e4229-f67c-4e5b-91a0-5975df843718
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769037769205337, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769037769208049, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "YQJBM9LLVBGKTK0RJ124", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769037769208219, "job": 1, "event": "recovery_finished"}
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561f59bece00
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: DB pointer 0x561f59cf6000
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 23:22:49 compute-0 ceph-mon[73963]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561f59bc31f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 21 23:22:49 compute-0 ceph-mon[73963]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@-1(???) e0 preinit fsid 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 21 23:22:49 compute-0 ceph-mon[73963]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 23:22:49 compute-0 ceph-mon[73963]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 21 23:22:49 compute-0 ceph-mon[73963]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 23:22:49 compute-0 ceph-mon[73963]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 23:22:49 compute-0 podman[73964]: 2026-01-21 23:22:49.24223058 +0000 UTC m=+0.057216566 container create eb44a67b2a2659c05440ebdc2349453411fce5a88280b83fa9727c247159c58f (image=quay.io/ceph/ceph:v18, name=thirsty_lovelace, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 23:22:49 compute-0 ceph-mon[73963]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2026-01-21T23:22:47.324156Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864308,os=Linux}
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).mds e1 new map
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 21 23:22:49 compute-0 ceph-mon[73963]: log_channel(cluster) log [DBG] : fsmap 
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mkfs 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 21 23:22:49 compute-0 ceph-mon[73963]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 21 23:22:49 compute-0 ceph-mon[73963]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 23:22:49 compute-0 systemd[1]: Started libpod-conmon-eb44a67b2a2659c05440ebdc2349453411fce5a88280b83fa9727c247159c58f.scope.
Jan 21 23:22:49 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:22:49 compute-0 podman[73964]: 2026-01-21 23:22:49.224187046 +0000 UTC m=+0.039173052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bedd5565657b89586d7adf1b86dfcc482baa00839a11b10940407f19bad5d17/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bedd5565657b89586d7adf1b86dfcc482baa00839a11b10940407f19bad5d17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bedd5565657b89586d7adf1b86dfcc482baa00839a11b10940407f19bad5d17/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:49 compute-0 podman[73964]: 2026-01-21 23:22:49.339303988 +0000 UTC m=+0.154289994 container init eb44a67b2a2659c05440ebdc2349453411fce5a88280b83fa9727c247159c58f (image=quay.io/ceph/ceph:v18, name=thirsty_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:22:49 compute-0 podman[73964]: 2026-01-21 23:22:49.347824819 +0000 UTC m=+0.162810815 container start eb44a67b2a2659c05440ebdc2349453411fce5a88280b83fa9727c247159c58f (image=quay.io/ceph/ceph:v18, name=thirsty_lovelace, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 21 23:22:49 compute-0 podman[73964]: 2026-01-21 23:22:49.35142202 +0000 UTC m=+0.166408036 container attach eb44a67b2a2659c05440ebdc2349453411fce5a88280b83fa9727c247159c58f (image=quay.io/ceph/ceph:v18, name=thirsty_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:22:49 compute-0 ceph-mon[73963]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 21 23:22:49 compute-0 ceph-mon[73963]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/969913569' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 21 23:22:49 compute-0 thirsty_lovelace[74019]:   cluster:
Jan 21 23:22:49 compute-0 thirsty_lovelace[74019]:     id:     3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:22:49 compute-0 thirsty_lovelace[74019]:     health: HEALTH_OK
Jan 21 23:22:49 compute-0 thirsty_lovelace[74019]:  
Jan 21 23:22:49 compute-0 thirsty_lovelace[74019]:   services:
Jan 21 23:22:49 compute-0 thirsty_lovelace[74019]:     mon: 1 daemons, quorum compute-0 (age 0.513405s)
Jan 21 23:22:49 compute-0 thirsty_lovelace[74019]:     mgr: no daemons active
Jan 21 23:22:49 compute-0 thirsty_lovelace[74019]:     osd: 0 osds: 0 up, 0 in
Jan 21 23:22:49 compute-0 thirsty_lovelace[74019]:  
Jan 21 23:22:49 compute-0 thirsty_lovelace[74019]:   data:
Jan 21 23:22:49 compute-0 thirsty_lovelace[74019]:     pools:   0 pools, 0 pgs
Jan 21 23:22:49 compute-0 thirsty_lovelace[74019]:     objects: 0 objects, 0 B
Jan 21 23:22:49 compute-0 thirsty_lovelace[74019]:     usage:   0 B used, 0 B / 0 B avail
Jan 21 23:22:49 compute-0 thirsty_lovelace[74019]:     pgs:     
Jan 21 23:22:49 compute-0 thirsty_lovelace[74019]:  
Jan 21 23:22:49 compute-0 systemd[1]: libpod-eb44a67b2a2659c05440ebdc2349453411fce5a88280b83fa9727c247159c58f.scope: Deactivated successfully.
Jan 21 23:22:49 compute-0 podman[73964]: 2026-01-21 23:22:49.770791214 +0000 UTC m=+0.585777230 container died eb44a67b2a2659c05440ebdc2349453411fce5a88280b83fa9727c247159c58f (image=quay.io/ceph/ceph:v18, name=thirsty_lovelace, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:22:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bedd5565657b89586d7adf1b86dfcc482baa00839a11b10940407f19bad5d17-merged.mount: Deactivated successfully.
Jan 21 23:22:49 compute-0 podman[73964]: 2026-01-21 23:22:49.830230158 +0000 UTC m=+0.645216184 container remove eb44a67b2a2659c05440ebdc2349453411fce5a88280b83fa9727c247159c58f (image=quay.io/ceph/ceph:v18, name=thirsty_lovelace, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 21 23:22:49 compute-0 systemd[1]: libpod-conmon-eb44a67b2a2659c05440ebdc2349453411fce5a88280b83fa9727c247159c58f.scope: Deactivated successfully.
Jan 21 23:22:49 compute-0 podman[74058]: 2026-01-21 23:22:49.893520489 +0000 UTC m=+0.040227596 container create 6180edada0811249f7ac3e7f227289568a44ddade961a48d0833743c97f14adf (image=quay.io/ceph/ceph:v18, name=recursing_euler, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 23:22:49 compute-0 systemd[1]: Started libpod-conmon-6180edada0811249f7ac3e7f227289568a44ddade961a48d0833743c97f14adf.scope.
Jan 21 23:22:49 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:22:49 compute-0 podman[74058]: 2026-01-21 23:22:49.875591168 +0000 UTC m=+0.022298265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52bbe2b42ed09d6910094b693d18ca8b91bd604f85e679aa5038f1c462fd8513/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52bbe2b42ed09d6910094b693d18ca8b91bd604f85e679aa5038f1c462fd8513/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52bbe2b42ed09d6910094b693d18ca8b91bd604f85e679aa5038f1c462fd8513/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52bbe2b42ed09d6910094b693d18ca8b91bd604f85e679aa5038f1c462fd8513/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:49 compute-0 podman[74058]: 2026-01-21 23:22:49.992192006 +0000 UTC m=+0.138899103 container init 6180edada0811249f7ac3e7f227289568a44ddade961a48d0833743c97f14adf (image=quay.io/ceph/ceph:v18, name=recursing_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:22:50 compute-0 podman[74058]: 2026-01-21 23:22:50.007642379 +0000 UTC m=+0.154349496 container start 6180edada0811249f7ac3e7f227289568a44ddade961a48d0833743c97f14adf (image=quay.io/ceph/ceph:v18, name=recursing_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 21 23:22:50 compute-0 podman[74058]: 2026-01-21 23:22:50.015400267 +0000 UTC m=+0.162107374 container attach 6180edada0811249f7ac3e7f227289568a44ddade961a48d0833743c97f14adf (image=quay.io/ceph/ceph:v18, name=recursing_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:22:50 compute-0 ceph-mon[73963]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 23:22:50 compute-0 ceph-mon[73963]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 21 23:22:50 compute-0 ceph-mon[73963]: fsmap 
Jan 21 23:22:50 compute-0 ceph-mon[73963]: osdmap e1: 0 total, 0 up, 0 in
Jan 21 23:22:50 compute-0 ceph-mon[73963]: mgrmap e1: no daemons active
Jan 21 23:22:50 compute-0 ceph-mon[73963]: from='client.? 192.168.122.100:0/969913569' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 21 23:22:50 compute-0 ceph-mon[73963]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 21 23:22:50 compute-0 ceph-mon[73963]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4240134042' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 21 23:22:50 compute-0 ceph-mon[73963]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4240134042' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 21 23:22:50 compute-0 recursing_euler[74074]: 
Jan 21 23:22:50 compute-0 recursing_euler[74074]: [global]
Jan 21 23:22:50 compute-0 recursing_euler[74074]:         fsid = 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:22:50 compute-0 recursing_euler[74074]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 21 23:22:50 compute-0 systemd[1]: libpod-6180edada0811249f7ac3e7f227289568a44ddade961a48d0833743c97f14adf.scope: Deactivated successfully.
Jan 21 23:22:50 compute-0 conmon[74074]: conmon 6180edada0811249f7ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6180edada0811249f7ac3e7f227289568a44ddade961a48d0833743c97f14adf.scope/container/memory.events
Jan 21 23:22:50 compute-0 podman[74058]: 2026-01-21 23:22:50.420056401 +0000 UTC m=+0.566763508 container died 6180edada0811249f7ac3e7f227289568a44ddade961a48d0833743c97f14adf (image=quay.io/ceph/ceph:v18, name=recursing_euler, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 21 23:22:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-52bbe2b42ed09d6910094b693d18ca8b91bd604f85e679aa5038f1c462fd8513-merged.mount: Deactivated successfully.
Jan 21 23:22:50 compute-0 podman[74058]: 2026-01-21 23:22:50.483184547 +0000 UTC m=+0.629891634 container remove 6180edada0811249f7ac3e7f227289568a44ddade961a48d0833743c97f14adf (image=quay.io/ceph/ceph:v18, name=recursing_euler, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 23:22:50 compute-0 systemd[1]: libpod-conmon-6180edada0811249f7ac3e7f227289568a44ddade961a48d0833743c97f14adf.scope: Deactivated successfully.
Jan 21 23:22:50 compute-0 podman[74111]: 2026-01-21 23:22:50.564714898 +0000 UTC m=+0.056965298 container create 3f71a0e9f90cfcd55b752dee48bec3b43eb87a6971046946f583e094cf907dea (image=quay.io/ceph/ceph:v18, name=eloquent_hoover, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 21 23:22:50 compute-0 systemd[1]: Started libpod-conmon-3f71a0e9f90cfcd55b752dee48bec3b43eb87a6971046946f583e094cf907dea.scope.
Jan 21 23:22:50 compute-0 podman[74111]: 2026-01-21 23:22:50.533689976 +0000 UTC m=+0.025940396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:22:50 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:22:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b889990dee903b0e694a5fe434705d9e976bea8085d6c7e4173793561f52b977/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b889990dee903b0e694a5fe434705d9e976bea8085d6c7e4173793561f52b977/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b889990dee903b0e694a5fe434705d9e976bea8085d6c7e4173793561f52b977/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b889990dee903b0e694a5fe434705d9e976bea8085d6c7e4173793561f52b977/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:50 compute-0 podman[74111]: 2026-01-21 23:22:50.661430005 +0000 UTC m=+0.153680415 container init 3f71a0e9f90cfcd55b752dee48bec3b43eb87a6971046946f583e094cf907dea (image=quay.io/ceph/ceph:v18, name=eloquent_hoover, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 23:22:50 compute-0 podman[74111]: 2026-01-21 23:22:50.675832287 +0000 UTC m=+0.168082667 container start 3f71a0e9f90cfcd55b752dee48bec3b43eb87a6971046946f583e094cf907dea (image=quay.io/ceph/ceph:v18, name=eloquent_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:22:50 compute-0 podman[74111]: 2026-01-21 23:22:50.68016304 +0000 UTC m=+0.172413420 container attach 3f71a0e9f90cfcd55b752dee48bec3b43eb87a6971046946f583e094cf907dea (image=quay.io/ceph/ceph:v18, name=eloquent_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 21 23:22:51 compute-0 ceph-mon[73963]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:22:51 compute-0 ceph-mon[73963]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1688531933' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:22:51 compute-0 systemd[1]: libpod-3f71a0e9f90cfcd55b752dee48bec3b43eb87a6971046946f583e094cf907dea.scope: Deactivated successfully.
Jan 21 23:22:51 compute-0 podman[74111]: 2026-01-21 23:22:51.046106105 +0000 UTC m=+0.538356485 container died 3f71a0e9f90cfcd55b752dee48bec3b43eb87a6971046946f583e094cf907dea (image=quay.io/ceph/ceph:v18, name=eloquent_hoover, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:22:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b889990dee903b0e694a5fe434705d9e976bea8085d6c7e4173793561f52b977-merged.mount: Deactivated successfully.
Jan 21 23:22:51 compute-0 podman[74111]: 2026-01-21 23:22:51.081266624 +0000 UTC m=+0.573516994 container remove 3f71a0e9f90cfcd55b752dee48bec3b43eb87a6971046946f583e094cf907dea (image=quay.io/ceph/ceph:v18, name=eloquent_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 23:22:51 compute-0 systemd[1]: libpod-conmon-3f71a0e9f90cfcd55b752dee48bec3b43eb87a6971046946f583e094cf907dea.scope: Deactivated successfully.
Jan 21 23:22:51 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 3759241a-7f1c-520d-ba17-879943ee2f00...
Jan 21 23:22:51 compute-0 ceph-mon[73963]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 21 23:22:51 compute-0 ceph-mon[73963]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 21 23:22:51 compute-0 ceph-mon[73963]: mon.compute-0@0(leader) e1 shutdown
Jan 21 23:22:51 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0[73959]: 2026-01-21T23:22:51.272+0000 7f7678cfb640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 21 23:22:51 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0[73959]: 2026-01-21T23:22:51.272+0000 7f7678cfb640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 21 23:22:51 compute-0 ceph-mon[73963]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 21 23:22:51 compute-0 ceph-mon[73963]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 21 23:22:51 compute-0 podman[74196]: 2026-01-21 23:22:51.465943664 +0000 UTC m=+0.235342340 container died fb71e23b8cf79eded8c06f2bd8ba7b1b542dec43e0033e5cd2671913a0182c07 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 21 23:22:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f567af4313728c14f3e90cc1f9a38527cd7a1e461c7fc4bcc171148a3a78652b-merged.mount: Deactivated successfully.
Jan 21 23:22:51 compute-0 podman[74196]: 2026-01-21 23:22:51.510198321 +0000 UTC m=+0.279597007 container remove fb71e23b8cf79eded8c06f2bd8ba7b1b542dec43e0033e5cd2671913a0182c07 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:22:51 compute-0 bash[74196]: ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0
Jan 21 23:22:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 23:22:51 compute-0 systemd[1]: ceph-3759241a-7f1c-520d-ba17-879943ee2f00@mon.compute-0.service: Deactivated successfully.
Jan 21 23:22:51 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 3759241a-7f1c-520d-ba17-879943ee2f00.
Jan 21 23:22:51 compute-0 systemd[1]: ceph-3759241a-7f1c-520d-ba17-879943ee2f00@mon.compute-0.service: Consumed 1.051s CPU time.
Jan 21 23:22:51 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 3759241a-7f1c-520d-ba17-879943ee2f00...
Jan 21 23:22:51 compute-0 podman[74299]: 2026-01-21 23:22:51.970940466 +0000 UTC m=+0.058005201 container create 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 21 23:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b1eda41034347a2539b68ae74e16bf2538ec44042abb5ab718e3131a91a714/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b1eda41034347a2539b68ae74e16bf2538ec44042abb5ab718e3131a91a714/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b1eda41034347a2539b68ae74e16bf2538ec44042abb5ab718e3131a91a714/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b1eda41034347a2539b68ae74e16bf2538ec44042abb5ab718e3131a91a714/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:52 compute-0 podman[74299]: 2026-01-21 23:22:51.950816768 +0000 UTC m=+0.037881533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:22:52 compute-0 podman[74299]: 2026-01-21 23:22:52.048531895 +0000 UTC m=+0.135596710 container init 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 23:22:52 compute-0 podman[74299]: 2026-01-21 23:22:52.059709568 +0000 UTC m=+0.146774333 container start 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 21 23:22:52 compute-0 bash[74299]: 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5
Jan 21 23:22:52 compute-0 systemd[1]: Started Ceph mon.compute-0 for 3759241a-7f1c-520d-ba17-879943ee2f00.
Jan 21 23:22:52 compute-0 ceph-mon[74318]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 23:22:52 compute-0 ceph-mon[74318]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 21 23:22:52 compute-0 ceph-mon[74318]: pidfile_write: ignore empty --pid-file
Jan 21 23:22:52 compute-0 ceph-mon[74318]: load: jerasure load: lrc 
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: RocksDB version: 7.9.2
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Git sha 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: DB SUMMARY
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: DB Session ID:  L1WW76NSVK36J4VFL8VG
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: CURRENT file:  CURRENT
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 51604 ; 
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                         Options.error_if_exists: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                       Options.create_if_missing: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                                     Options.env: 0x559f1c784c40
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                                Options.info_log: 0x559f1db37040
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                              Options.statistics: (nil)
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                               Options.use_fsync: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                              Options.db_log_dir: 
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                                 Options.wal_dir: 
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                    Options.write_buffer_manager: 0x559f1db46b40
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                  Options.unordered_write: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                               Options.row_cache: None
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                              Options.wal_filter: None
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.two_write_queues: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.wal_compression: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.atomic_flush: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.max_background_jobs: 2
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.max_background_compactions: -1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.max_subcompactions: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.max_total_wal_size: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                          Options.max_open_files: -1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:       Options.compaction_readahead_size: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Compression algorithms supported:
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         kZSTD supported: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         kXpressCompression supported: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         kBZip2Compression supported: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         kLZ4Compression supported: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         kZlibCompression supported: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         kLZ4HCCompression supported: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         kSnappyCompression supported: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:           Options.merge_operator: 
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:        Options.compaction_filter: None
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f1db36c40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559f1db2f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:        Options.write_buffer_size: 33554432
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:  Options.max_write_buffer_number: 2
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:          Options.compression: NoCompression
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.num_levels: 7
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 756e4229-f67c-4e5b-91a0-5975df843718
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769037772118862, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769037772123962, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 51378, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 127, "table_properties": {"data_size": 49931, "index_size": 153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2823, "raw_average_key_size": 30, "raw_value_size": 47663, "raw_average_value_size": 507, "num_data_blocks": 7, "num_entries": 94, "num_filter_entries": 94, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037772, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769037772124105, "job": 1, "event": "recovery_finished"}
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559f1db58e00
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: DB pointer 0x559f1dbe2000
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 23:22:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   52.07 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0   52.07 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f1db2f1f0#2 capacity: 512.00 MB usage: 0.77 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.34 KB,6.55651e-05%) Misc(2,0.95 KB,0.000181794%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
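The stats block above is printed once at DB open; every counter is zero because no writes have landed yet, and the block-cache occupancy of 18446744073709551615 is 2^64-1, which looks like an unsigned rendering of -1 rather than a real count. The fixed-layout "Cumulative writes" line can be scraped with a small regex; a sketch, assuming plain integer counts (RocksDB may abbreviate large values with K/M suffixes, which this deliberately does not handle):

    # stats_probe.py -- hypothetical parser for one line of the
    # "DUMPING STATS" block above.
    import re

    WRITES_RE = re.compile(
        r"Cumulative writes: (\d+) writes, (\d+) keys, (\d+) commit groups"
    )

    def parse_cumulative_writes(text):
        m = WRITES_RE.search(text)
        if m is None:
            return None
        writes, keys, groups = map(int, m.groups())
        return {"writes": writes, "keys": keys, "commit_groups": groups}

    sample = ("Cumulative writes: 0 writes, 0 keys, 0 commit groups, "
              "0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s")
    print(parse_cumulative_writes(sample))
    # -> {'writes': 0, 'keys': 0, 'commit_groups': 0}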
Jan 21 23:22:52 compute-0 ceph-mon[74318]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:22:52 compute-0 ceph-mon[74318]: mon.compute-0@-1(???) e1 preinit fsid 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:22:52 compute-0 ceph-mon[74318]: mon.compute-0@-1(???).mds e1 new map
Jan 21 23:22:52 compute-0 ceph-mon[74318]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 21 23:22:52 compute-0 ceph-mon[74318]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 21 23:22:52 compute-0 ceph-mon[74318]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 23:22:52 compute-0 ceph-mon[74318]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 23:22:52 compute-0 ceph-mon[74318]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 23:22:52 compute-0 ceph-mon[74318]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 21 23:22:52 compute-0 ceph-mon[74318]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 21 23:22:52 compute-0 ceph-mon[74318]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 21 23:22:52 compute-0 ceph-mon[74318]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 21 23:22:52 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 23:22:52 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 23:22:52 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 21 23:22:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 23:22:52 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : fsmap 
Jan 21 23:22:52 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 21 23:22:52 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 21 23:22:52 compute-0 podman[74319]: 2026-01-21 23:22:52.168458244 +0000 UTC m=+0.063350304 container create 8c593e6c60013f62a4bd293d7986adaba82474c2d1f49b96408760311e3f9438 (image=quay.io/ceph/ceph:v18, name=romantic_sanderson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:22:52 compute-0 ceph-mon[74318]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 23:22:52 compute-0 ceph-mon[74318]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 21 23:22:52 compute-0 ceph-mon[74318]: fsmap 
Jan 21 23:22:52 compute-0 ceph-mon[74318]: osdmap e1: 0 total, 0 up, 0 in
Jan 21 23:22:52 compute-0 ceph-mon[74318]: mgrmap e1: no daemons active
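At this point the single monitor has won a standalone election and reports itself as the only quorum member (rank 0). The same state can be confirmed from the CLI; a sketch, assuming the ceph client and admin keyring are present on the host, as they are during bootstrap:

    # quorum_check.py -- illustrative only: read the quorum membership
    # that the mon logs above announce.
    import json
    import subprocess

    def quorum_names():
        out = subprocess.run(
            ["ceph", "quorum_status", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out).get("quorum_names", [])

    if __name__ == "__main__":
        print(quorum_names())  # expected here: ['compute-0']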
Jan 21 23:22:52 compute-0 systemd[1]: Started libpod-conmon-8c593e6c60013f62a4bd293d7986adaba82474c2d1f49b96408760311e3f9438.scope.
Jan 21 23:22:52 compute-0 podman[74319]: 2026-01-21 23:22:52.147020267 +0000 UTC m=+0.041912427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:22:52 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55cc1f73b4914f07f2f9d996acbe916ea9f8b7d278f7dd07c634013bb2446cc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55cc1f73b4914f07f2f9d996acbe916ea9f8b7d278f7dd07c634013bb2446cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55cc1f73b4914f07f2f9d996acbe916ea9f8b7d278f7dd07c634013bb2446cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:52 compute-0 podman[74319]: 2026-01-21 23:22:52.289287921 +0000 UTC m=+0.184179991 container init 8c593e6c60013f62a4bd293d7986adaba82474c2d1f49b96408760311e3f9438 (image=quay.io/ceph/ceph:v18, name=romantic_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:22:52 compute-0 podman[74319]: 2026-01-21 23:22:52.300938718 +0000 UTC m=+0.195830778 container start 8c593e6c60013f62a4bd293d7986adaba82474c2d1f49b96408760311e3f9438 (image=quay.io/ceph/ceph:v18, name=romantic_sanderson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 23:22:52 compute-0 podman[74319]: 2026-01-21 23:22:52.305023154 +0000 UTC m=+0.199915234 container attach 8c593e6c60013f62a4bd293d7986adaba82474c2d1f49b96408760311e3f9438 (image=quay.io/ceph/ceph:v18, name=romantic_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:22:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Jan 21 23:22:52 compute-0 systemd[1]: libpod-8c593e6c60013f62a4bd293d7986adaba82474c2d1f49b96408760311e3f9438.scope: Deactivated successfully.
Jan 21 23:22:52 compute-0 podman[74319]: 2026-01-21 23:22:52.741831554 +0000 UTC m=+0.636723604 container died 8c593e6c60013f62a4bd293d7986adaba82474c2d1f49b96408760311e3f9438 (image=quay.io/ceph/ceph:v18, name=romantic_sanderson, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:22:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f55cc1f73b4914f07f2f9d996acbe916ea9f8b7d278f7dd07c634013bb2446cc-merged.mount: Deactivated successfully.
Jan 21 23:22:52 compute-0 podman[74319]: 2026-01-21 23:22:52.795377125 +0000 UTC m=+0.690269175 container remove 8c593e6c60013f62a4bd293d7986adaba82474c2d1f49b96408760311e3f9438 (image=quay.io/ceph/ceph:v18, name=romantic_sanderson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 23:22:52 compute-0 systemd[1]: libpod-conmon-8c593e6c60013f62a4bd293d7986adaba82474c2d1f49b96408760311e3f9438.scope: Deactivated successfully.
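The create/init/start/attach/died/remove sequence above (container romantic_sanderson) is a one-shot container wrapping a single ceph CLI call, matching the "config set public_network" mon_command the mon logged. A rough approximation of that pattern follows; the mount options, --net=host, the "global" section, and the CIDR value are assumptions for illustration, not taken from the log:

    # oneshot.py -- rough sketch of the throwaway-container pattern above:
    # run one ceph CLI command inside the ceph image, then discard the
    # container (--rm). All paths and values here are assumed.
    import subprocess

    def ceph_in_container(*args):
        cmd = [
            "podman", "run", "--rm", "--net=host",
            "-v", "/etc/ceph:/etc/ceph:z",
            "quay.io/ceph/ceph:v18",
            "ceph", *args,
        ]
        return subprocess.run(cmd, check=True,
                              capture_output=True, text=True).stdout

    # The mon log records: mon_command([{prefix=config set, name=public_network}])
    print(ceph_in_container("config", "set", "global",
                            "public_network", "192.168.122.0/24"))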
Jan 21 23:22:52 compute-0 podman[74411]: 2026-01-21 23:22:52.872200392 +0000 UTC m=+0.049545120 container create b08f110b298fd0aa87887097553fd00a116c4afb228ebd46ba9e333091b980dd (image=quay.io/ceph/ceph:v18, name=great_curie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:22:52 compute-0 systemd[1]: Started libpod-conmon-b08f110b298fd0aa87887097553fd00a116c4afb228ebd46ba9e333091b980dd.scope.
Jan 21 23:22:52 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b84f794f7ba69482d65628350582195fe8ca466a75e5c6e2c04abfa960e6990e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b84f794f7ba69482d65628350582195fe8ca466a75e5c6e2c04abfa960e6990e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b84f794f7ba69482d65628350582195fe8ca466a75e5c6e2c04abfa960e6990e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:52 compute-0 podman[74411]: 2026-01-21 23:22:52.855087207 +0000 UTC m=+0.032431945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:22:52 compute-0 podman[74411]: 2026-01-21 23:22:52.966329689 +0000 UTC m=+0.143674487 container init b08f110b298fd0aa87887097553fd00a116c4afb228ebd46ba9e333091b980dd (image=quay.io/ceph/ceph:v18, name=great_curie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 21 23:22:52 compute-0 podman[74411]: 2026-01-21 23:22:52.976664117 +0000 UTC m=+0.154008875 container start b08f110b298fd0aa87887097553fd00a116c4afb228ebd46ba9e333091b980dd (image=quay.io/ceph/ceph:v18, name=great_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 21 23:22:52 compute-0 podman[74411]: 2026-01-21 23:22:52.981094583 +0000 UTC m=+0.158439331 container attach b08f110b298fd0aa87887097553fd00a116c4afb228ebd46ba9e333091b980dd (image=quay.io/ceph/ceph:v18, name=great_curie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:22:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Jan 21 23:22:53 compute-0 systemd[1]: libpod-b08f110b298fd0aa87887097553fd00a116c4afb228ebd46ba9e333091b980dd.scope: Deactivated successfully.
Jan 21 23:22:53 compute-0 conmon[74429]: conmon b08f110b298fd0aa8788 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b08f110b298fd0aa87887097553fd00a116c4afb228ebd46ba9e333091b980dd.scope/container/memory.events
Jan 21 23:22:53 compute-0 podman[74411]: 2026-01-21 23:22:53.388930653 +0000 UTC m=+0.566275411 container died b08f110b298fd0aa87887097553fd00a116c4afb228ebd46ba9e333091b980dd (image=quay.io/ceph/ceph:v18, name=great_curie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 21 23:22:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b84f794f7ba69482d65628350582195fe8ca466a75e5c6e2c04abfa960e6990e-merged.mount: Deactivated successfully.
Jan 21 23:22:53 compute-0 podman[74411]: 2026-01-21 23:22:53.428782666 +0000 UTC m=+0.606127384 container remove b08f110b298fd0aa87887097553fd00a116c4afb228ebd46ba9e333091b980dd (image=quay.io/ceph/ceph:v18, name=great_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:22:53 compute-0 systemd[1]: libpod-conmon-b08f110b298fd0aa87887097553fd00a116c4afb228ebd46ba9e333091b980dd.scope: Deactivated successfully.
Jan 21 23:22:53 compute-0 systemd[1]: Reloading.
Jan 21 23:22:53 compute-0 systemd-rc-local-generator[74497]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:22:53 compute-0 systemd-sysv-generator[74500]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:22:53 compute-0 systemd[1]: Reloading.
Jan 21 23:22:53 compute-0 systemd-sysv-generator[74543]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:22:53 compute-0 systemd-rc-local-generator[74539]: /etc/rc.d/rc.local is not marked executable, skipping.
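Both generator messages during the reloads are benign: systemd-rc-local-generator only emits an rc-local unit when the script carries its executable bit, and systemd-sysv-generator shims the legacy network initscript. A trivial sketch of the same executable-bit test the rc-local generator applies:

    # rclocal_check.py -- mirrors the check behind the
    # "/etc/rc.d/rc.local is not marked executable, skipping" message.
    import os
    import stat

    PATH = "/etc/rc.d/rc.local"

    def is_executable(path):
        try:
            st = os.stat(path)
        except FileNotFoundError:
            return False
        return bool(st.st_mode & stat.S_IXUSR)

    print(PATH, "executable:", is_executable(PATH))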
Jan 21 23:22:53 compute-0 systemd[1]: Starting Ceph mgr.compute-0.boqcsl for 3759241a-7f1c-520d-ba17-879943ee2f00...
Jan 21 23:22:54 compute-0 podman[74595]: 2026-01-21 23:22:54.276945324 +0000 UTC m=+0.067444050 container create 1a53c738ce795e9ed95ab1b117fba7b847c69d9b5cd04dbc6af0ef99331b1962 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 23:22:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38ad2ee2590cea40dbd2ef5c9a65359573d7309f75db3a6a428e5a6863e59bc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38ad2ee2590cea40dbd2ef5c9a65359573d7309f75db3a6a428e5a6863e59bc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38ad2ee2590cea40dbd2ef5c9a65359573d7309f75db3a6a428e5a6863e59bc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38ad2ee2590cea40dbd2ef5c9a65359573d7309f75db3a6a428e5a6863e59bc0/merged/var/lib/ceph/mgr/ceph-compute-0.boqcsl supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:54 compute-0 podman[74595]: 2026-01-21 23:22:54.341718191 +0000 UTC m=+0.132216967 container init 1a53c738ce795e9ed95ab1b117fba7b847c69d9b5cd04dbc6af0ef99331b1962 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:22:54 compute-0 podman[74595]: 2026-01-21 23:22:54.249951306 +0000 UTC m=+0.040450082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:22:54 compute-0 podman[74595]: 2026-01-21 23:22:54.34985336 +0000 UTC m=+0.140352086 container start 1a53c738ce795e9ed95ab1b117fba7b847c69d9b5cd04dbc6af0ef99331b1962 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 23:22:54 compute-0 bash[74595]: 1a53c738ce795e9ed95ab1b117fba7b847c69d9b5cd04dbc6af0ef99331b1962
Jan 21 23:22:54 compute-0 systemd[1]: Started Ceph mgr.compute-0.boqcsl for 3759241a-7f1c-520d-ba17-879943ee2f00.
Jan 21 23:22:54 compute-0 ceph-mgr[74614]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 23:22:54 compute-0 ceph-mgr[74614]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 21 23:22:54 compute-0 ceph-mgr[74614]: pidfile_write: ignore empty --pid-file
Jan 21 23:22:54 compute-0 podman[74616]: 2026-01-21 23:22:54.472085581 +0000 UTC m=+0.067859353 container create 8ba6cd3d0ac78b123170cc50f8a4ada9142a19e0e3df3c995f9cce01c332969b (image=quay.io/ceph/ceph:v18, name=interesting_greider, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:22:54 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'alerts'
Jan 21 23:22:54 compute-0 systemd[1]: Started libpod-conmon-8ba6cd3d0ac78b123170cc50f8a4ada9142a19e0e3df3c995f9cce01c332969b.scope.
Jan 21 23:22:54 compute-0 podman[74616]: 2026-01-21 23:22:54.446045632 +0000 UTC m=+0.041819334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:22:54 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:22:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f51860fd5593d5f1c92fd1d6787a08e1ae4065e04cd7287529abe0fb21351f2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f51860fd5593d5f1c92fd1d6787a08e1ae4065e04cd7287529abe0fb21351f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f51860fd5593d5f1c92fd1d6787a08e1ae4065e04cd7287529abe0fb21351f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:54 compute-0 podman[74616]: 2026-01-21 23:22:54.579732732 +0000 UTC m=+0.175506394 container init 8ba6cd3d0ac78b123170cc50f8a4ada9142a19e0e3df3c995f9cce01c332969b (image=quay.io/ceph/ceph:v18, name=interesting_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 21 23:22:54 compute-0 podman[74616]: 2026-01-21 23:22:54.590356758 +0000 UTC m=+0.186130420 container start 8ba6cd3d0ac78b123170cc50f8a4ada9142a19e0e3df3c995f9cce01c332969b (image=quay.io/ceph/ceph:v18, name=interesting_greider, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 21 23:22:54 compute-0 podman[74616]: 2026-01-21 23:22:54.594340281 +0000 UTC m=+0.190113913 container attach 8ba6cd3d0ac78b123170cc50f8a4ada9142a19e0e3df3c995f9cce01c332969b (image=quay.io/ceph/ceph:v18, name=interesting_greider, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:22:54 compute-0 ceph-mgr[74614]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 21 23:22:54 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'balancer'
Jan 21 23:22:54 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:22:54.793+0000 7fb93ccc0140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
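The "missing NOTIFY_TYPES member" warnings repeat for most bundled mgr modules below; the loader expects each module to declare which notification types it consumes and falls back to delivering all of them when the attribute is absent. A sketch of the expected shape, assuming Ceph's mgr_module API as of reef (this imports only inside a running ceph-mgr, so it is not runnable standalone):

    # toy_module.py -- shape of the declaration the loader warns about.
    # mgr_module is provided by ceph-mgr itself; NotifyType members shown
    # here (mon_map, osd_map) are assumed from the reef-era API.
    from mgr_module import MgrModule, NotifyType

    class ToyModule(MgrModule):
        # Declaring NOTIFY_TYPES silences the "missing NOTIFY_TYPES member"
        # warning and limits which notify() events the mgr delivers.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            self.log.debug("notify %s %s", notify_type, notify_id)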
Jan 21 23:22:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 21 23:22:55 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3300378202' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 23:22:55 compute-0 interesting_greider[74656]: 
Jan 21 23:22:55 compute-0 interesting_greider[74656]: {
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     "fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     "health": {
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "status": "HEALTH_OK",
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "checks": {},
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "mutes": []
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     },
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     "election_epoch": 5,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     "quorum": [
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         0
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     ],
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     "quorum_names": [
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "compute-0"
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     ],
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     "quorum_age": 2,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     "monmap": {
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "epoch": 1,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "min_mon_release_name": "reef",
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "num_mons": 1
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     },
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     "osdmap": {
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "epoch": 1,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "num_osds": 0,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "num_up_osds": 0,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "osd_up_since": 0,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "num_in_osds": 0,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "osd_in_since": 0,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "num_remapped_pgs": 0
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     },
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     "pgmap": {
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "pgs_by_state": [],
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "num_pgs": 0,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "num_pools": 0,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "num_objects": 0,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "data_bytes": 0,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "bytes_used": 0,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "bytes_avail": 0,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "bytes_total": 0
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     },
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     "fsmap": {
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "epoch": 1,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "by_rank": [],
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "up:standby": 0
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     },
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     "mgrmap": {
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "available": false,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "num_standbys": 0,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "modules": [
Jan 21 23:22:55 compute-0 interesting_greider[74656]:             "iostat",
Jan 21 23:22:55 compute-0 interesting_greider[74656]:             "nfs",
Jan 21 23:22:55 compute-0 interesting_greider[74656]:             "restful"
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         ],
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "services": {}
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     },
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     "servicemap": {
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "epoch": 1,
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "modified": "2026-01-21T23:22:49.246100+0000",
Jan 21 23:22:55 compute-0 interesting_greider[74656]:         "services": {}
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     },
Jan 21 23:22:55 compute-0 interesting_greider[74656]:     "progress_events": {}
Jan 21 23:22:55 compute-0 interesting_greider[74656]: }
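The JSON above is the "status" mon command dispatched by client.admin, echoed through the one-shot container's stdout. Given the same document as clean JSON (for example piped from "ceph status --format json"), a sketch of the checks a bootstrap script might make at this stage:

    # status_assert.py -- consumes a ceph status JSON document like the
    # block above; assumes clean JSON on stdin (no journal prefixes).
    import json
    import sys

    status = json.load(sys.stdin)
    assert status["health"]["status"] == "HEALTH_OK"
    assert status["monmap"]["num_mons"] == 1
    # At this point in the bootstrap the mgr has not registered yet:
    print("mgr available:", status["mgrmap"]["available"])  # False here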
Jan 21 23:22:55 compute-0 ceph-mgr[74614]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 21 23:22:55 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'cephadm'
Jan 21 23:22:55 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:22:55.042+0000 7fb93ccc0140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 21 23:22:55 compute-0 systemd[1]: libpod-8ba6cd3d0ac78b123170cc50f8a4ada9142a19e0e3df3c995f9cce01c332969b.scope: Deactivated successfully.
Jan 21 23:22:55 compute-0 podman[74616]: 2026-01-21 23:22:55.044695876 +0000 UTC m=+0.640469528 container died 8ba6cd3d0ac78b123170cc50f8a4ada9142a19e0e3df3c995f9cce01c332969b (image=quay.io/ceph/ceph:v18, name=interesting_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:22:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f51860fd5593d5f1c92fd1d6787a08e1ae4065e04cd7287529abe0fb21351f2-merged.mount: Deactivated successfully.
Jan 21 23:22:55 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3300378202' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 23:22:55 compute-0 podman[74616]: 2026-01-21 23:22:55.101358584 +0000 UTC m=+0.697132246 container remove 8ba6cd3d0ac78b123170cc50f8a4ada9142a19e0e3df3c995f9cce01c332969b (image=quay.io/ceph/ceph:v18, name=interesting_greider, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:22:55 compute-0 systemd[1]: libpod-conmon-8ba6cd3d0ac78b123170cc50f8a4ada9142a19e0e3df3c995f9cce01c332969b.scope: Deactivated successfully.
Jan 21 23:22:56 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'crash'
Jan 21 23:22:57 compute-0 podman[74707]: 2026-01-21 23:22:57.19147031 +0000 UTC m=+0.064128028 container create 4aa2383e93104c6486ac95da0afeead39c3f80acf9fa5a0d13fda3d0fcef43d4 (image=quay.io/ceph/ceph:v18, name=wizardly_mendel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:22:57 compute-0 ceph-mgr[74614]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 21 23:22:57 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'dashboard'
Jan 21 23:22:57 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:22:57.236+0000 7fb93ccc0140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 21 23:22:57 compute-0 podman[74707]: 2026-01-21 23:22:57.156649713 +0000 UTC m=+0.029307521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:22:57 compute-0 systemd[1]: Started libpod-conmon-4aa2383e93104c6486ac95da0afeead39c3f80acf9fa5a0d13fda3d0fcef43d4.scope.
Jan 21 23:22:57 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:22:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb0c934eba5903f0698fc3e2998a0db8d995223c0e78c972f73f6faee7f6123/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb0c934eba5903f0698fc3e2998a0db8d995223c0e78c972f73f6faee7f6123/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb0c934eba5903f0698fc3e2998a0db8d995223c0e78c972f73f6faee7f6123/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:22:57 compute-0 podman[74707]: 2026-01-21 23:22:57.55787673 +0000 UTC m=+0.430534538 container init 4aa2383e93104c6486ac95da0afeead39c3f80acf9fa5a0d13fda3d0fcef43d4 (image=quay.io/ceph/ceph:v18, name=wizardly_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:22:57 compute-0 podman[74707]: 2026-01-21 23:22:57.568932019 +0000 UTC m=+0.441589757 container start 4aa2383e93104c6486ac95da0afeead39c3f80acf9fa5a0d13fda3d0fcef43d4 (image=quay.io/ceph/ceph:v18, name=wizardly_mendel, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:22:57 compute-0 podman[74707]: 2026-01-21 23:22:57.572993414 +0000 UTC m=+0.445651182 container attach 4aa2383e93104c6486ac95da0afeead39c3f80acf9fa5a0d13fda3d0fcef43d4 (image=quay.io/ceph/ceph:v18, name=wizardly_mendel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:22:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 21 23:22:57 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2387191925' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]: 
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]: {
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     "fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     "health": {
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "status": "HEALTH_OK",
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "checks": {},
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "mutes": []
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     },
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     "election_epoch": 5,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     "quorum": [
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         0
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     ],
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     "quorum_names": [
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "compute-0"
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     ],
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     "quorum_age": 5,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     "monmap": {
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "epoch": 1,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "min_mon_release_name": "reef",
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "num_mons": 1
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     },
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     "osdmap": {
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "epoch": 1,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "num_osds": 0,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "num_up_osds": 0,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "osd_up_since": 0,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "num_in_osds": 0,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "osd_in_since": 0,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "num_remapped_pgs": 0
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     },
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     "pgmap": {
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "pgs_by_state": [],
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "num_pgs": 0,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "num_pools": 0,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "num_objects": 0,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "data_bytes": 0,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "bytes_used": 0,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "bytes_avail": 0,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "bytes_total": 0
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     },
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     "fsmap": {
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "epoch": 1,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "by_rank": [],
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "up:standby": 0
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     },
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     "mgrmap": {
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "available": false,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "num_standbys": 0,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "modules": [
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:             "iostat",
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:             "nfs",
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:             "restful"
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         ],
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "services": {}
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     },
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     "servicemap": {
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "epoch": 1,
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "modified": "2026-01-21T23:22:49.246100+0000",
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:         "services": {}
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     },
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]:     "progress_events": {}
Jan 21 23:22:57 compute-0 wizardly_mendel[74723]: }
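This second status dump matches the first except that quorum_age has advanced from 2 to 5 seconds and the mgrmap still reports "available": false, which is why the tooling keeps re-polling while the mgr daemon finishes loading its modules. A sketch of such a wait loop, assuming the ceph CLI is on PATH:

    # wait_mgr.py -- illustrative poll loop: re-run ceph status until the
    # mgr map reports an active daemon, as the repeated dumps above suggest.
    import json
    import subprocess
    import time

    def mgr_available():
        out = subprocess.run(
            ["ceph", "status", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["mgrmap"]["available"]

    while not mgr_available():
        time.sleep(2)
    print("mgr is up")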
Jan 21 23:22:57 compute-0 systemd[1]: libpod-4aa2383e93104c6486ac95da0afeead39c3f80acf9fa5a0d13fda3d0fcef43d4.scope: Deactivated successfully.
Jan 21 23:22:57 compute-0 podman[74707]: 2026-01-21 23:22:57.983257989 +0000 UTC m=+0.855915717 container died 4aa2383e93104c6486ac95da0afeead39c3f80acf9fa5a0d13fda3d0fcef43d4 (image=quay.io/ceph/ceph:v18, name=wizardly_mendel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:22:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-3eb0c934eba5903f0698fc3e2998a0db8d995223c0e78c972f73f6faee7f6123-merged.mount: Deactivated successfully.
Jan 21 23:22:58 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2387191925' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 23:22:58 compute-0 podman[74707]: 2026-01-21 23:22:58.035398569 +0000 UTC m=+0.908056297 container remove 4aa2383e93104c6486ac95da0afeead39c3f80acf9fa5a0d13fda3d0fcef43d4 (image=quay.io/ceph/ceph:v18, name=wizardly_mendel, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 21 23:22:58 compute-0 systemd[1]: libpod-conmon-4aa2383e93104c6486ac95da0afeead39c3f80acf9fa5a0d13fda3d0fcef43d4.scope: Deactivated successfully.
Jan 21 23:22:58 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'devicehealth'
Jan 21 23:22:58 compute-0 ceph-mgr[74614]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 21 23:22:58 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'diskprediction_local'
Jan 21 23:22:58 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:22:58.810+0000 7fb93ccc0140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 21 23:22:59 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 21 23:22:59 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 21 23:22:59 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]:   from numpy import show_config as show_numpy_config
Jan 21 23:22:59 compute-0 ceph-mgr[74614]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 21 23:22:59 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'influx'
Jan 21 23:22:59 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:22:59.289+0000 7fb93ccc0140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 21 23:22:59 compute-0 ceph-mgr[74614]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 21 23:22:59 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'insights'
Jan 21 23:22:59 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:22:59.506+0000 7fb93ccc0140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 21 23:22:59 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'iostat'
Jan 21 23:22:59 compute-0 ceph-mgr[74614]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 21 23:22:59 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'k8sevents'
Jan 21 23:22:59 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:22:59.942+0000 7fb93ccc0140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 21 23:23:00 compute-0 podman[74763]: 2026-01-21 23:23:00.117975294 +0000 UTC m=+0.056747822 container create b38dd6c0df1e325694be069d0103907f29d9fc17f236f7007fbdd4084f417f66 (image=quay.io/ceph/ceph:v18, name=hopeful_pike, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:23:00 compute-0 systemd[1]: Started libpod-conmon-b38dd6c0df1e325694be069d0103907f29d9fc17f236f7007fbdd4084f417f66.scope.
Jan 21 23:23:00 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:00 compute-0 podman[74763]: 2026-01-21 23:23:00.089147309 +0000 UTC m=+0.027919897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3452fd2a66311ee7ac7e84cf07388cd352f2154c1d9f4c651b48d18b9f286b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3452fd2a66311ee7ac7e84cf07388cd352f2154c1d9f4c651b48d18b9f286b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3452fd2a66311ee7ac7e84cf07388cd352f2154c1d9f4c651b48d18b9f286b7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:00 compute-0 podman[74763]: 2026-01-21 23:23:00.208842431 +0000 UTC m=+0.147615019 container init b38dd6c0df1e325694be069d0103907f29d9fc17f236f7007fbdd4084f417f66 (image=quay.io/ceph/ceph:v18, name=hopeful_pike, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:23:00 compute-0 podman[74763]: 2026-01-21 23:23:00.21990656 +0000 UTC m=+0.158679098 container start b38dd6c0df1e325694be069d0103907f29d9fc17f236f7007fbdd4084f417f66 (image=quay.io/ceph/ceph:v18, name=hopeful_pike, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:23:00 compute-0 podman[74763]: 2026-01-21 23:23:00.224839492 +0000 UTC m=+0.163612030 container attach b38dd6c0df1e325694be069d0103907f29d9fc17f236f7007fbdd4084f417f66 (image=quay.io/ceph/ceph:v18, name=hopeful_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 21 23:23:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 21 23:23:00 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/390350962' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 23:23:00 compute-0 hopeful_pike[74780]: 
Jan 21 23:23:00 compute-0 hopeful_pike[74780]: {
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     "fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     "health": {
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "status": "HEALTH_OK",
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "checks": {},
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "mutes": []
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     },
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     "election_epoch": 5,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     "quorum": [
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         0
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     ],
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     "quorum_names": [
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "compute-0"
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     ],
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     "quorum_age": 8,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     "monmap": {
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "epoch": 1,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "min_mon_release_name": "reef",
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "num_mons": 1
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     },
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     "osdmap": {
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "epoch": 1,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "num_osds": 0,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "num_up_osds": 0,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "osd_up_since": 0,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "num_in_osds": 0,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "osd_in_since": 0,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "num_remapped_pgs": 0
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     },
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     "pgmap": {
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "pgs_by_state": [],
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "num_pgs": 0,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "num_pools": 0,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "num_objects": 0,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "data_bytes": 0,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "bytes_used": 0,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "bytes_avail": 0,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "bytes_total": 0
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     },
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     "fsmap": {
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "epoch": 1,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "by_rank": [],
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "up:standby": 0
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     },
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     "mgrmap": {
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "available": false,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "num_standbys": 0,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "modules": [
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:             "iostat",
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:             "nfs",
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:             "restful"
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         ],
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "services": {}
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     },
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     "servicemap": {
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "epoch": 1,
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "modified": "2026-01-21T23:22:49.246100+0000",
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:         "services": {}
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     },
Jan 21 23:23:00 compute-0 hopeful_pike[74780]:     "progress_events": {}
Jan 21 23:23:00 compute-0 hopeful_pike[74780]: }
Jan 21 23:23:00 compute-0 systemd[1]: libpod-b38dd6c0df1e325694be069d0103907f29d9fc17f236f7007fbdd4084f417f66.scope: Deactivated successfully.
Jan 21 23:23:00 compute-0 podman[74763]: 2026-01-21 23:23:00.629997801 +0000 UTC m=+0.568770339 container died b38dd6c0df1e325694be069d0103907f29d9fc17f236f7007fbdd4084f417f66 (image=quay.io/ceph/ceph:v18, name=hopeful_pike, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 23:23:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3452fd2a66311ee7ac7e84cf07388cd352f2154c1d9f4c651b48d18b9f286b7-merged.mount: Deactivated successfully.
Jan 21 23:23:00 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/390350962' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 23:23:00 compute-0 podman[74763]: 2026-01-21 23:23:00.693618452 +0000 UTC m=+0.632390990 container remove b38dd6c0df1e325694be069d0103907f29d9fc17f236f7007fbdd4084f417f66 (image=quay.io/ceph/ceph:v18, name=hopeful_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Jan 21 23:23:00 compute-0 systemd[1]: libpod-conmon-b38dd6c0df1e325694be069d0103907f29d9fc17f236f7007fbdd4084f417f66.scope: Deactivated successfully.
Jan 21 23:23:01 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'localpool'
Jan 21 23:23:01 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'mds_autoscaler'
Jan 21 23:23:02 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'mirroring'
Jan 21 23:23:02 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'nfs'
Jan 21 23:23:02 compute-0 podman[74820]: 2026-01-21 23:23:02.792272841 +0000 UTC m=+0.066949706 container create a75650f333d70731c26d3ad0e40a0fdde2f4cdf9cdea0cf48e71bbb10d8a2710 (image=quay.io/ceph/ceph:v18, name=jovial_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 21 23:23:02 compute-0 systemd[1]: Started libpod-conmon-a75650f333d70731c26d3ad0e40a0fdde2f4cdf9cdea0cf48e71bbb10d8a2710.scope.
Jan 21 23:23:02 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:02 compute-0 podman[74820]: 2026-01-21 23:23:02.768717528 +0000 UTC m=+0.043394423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a9db141fcf6b3cc2336428655fa21f82feb238eddce15ca145b7aeb4726e35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a9db141fcf6b3cc2336428655fa21f82feb238eddce15ca145b7aeb4726e35/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a9db141fcf6b3cc2336428655fa21f82feb238eddce15ca145b7aeb4726e35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:02 compute-0 podman[74820]: 2026-01-21 23:23:02.874934286 +0000 UTC m=+0.149611181 container init a75650f333d70731c26d3ad0e40a0fdde2f4cdf9cdea0cf48e71bbb10d8a2710 (image=quay.io/ceph/ceph:v18, name=jovial_almeida, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 21 23:23:02 compute-0 podman[74820]: 2026-01-21 23:23:02.884955823 +0000 UTC m=+0.159632728 container start a75650f333d70731c26d3ad0e40a0fdde2f4cdf9cdea0cf48e71bbb10d8a2710 (image=quay.io/ceph/ceph:v18, name=jovial_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 21 23:23:02 compute-0 podman[74820]: 2026-01-21 23:23:02.900604864 +0000 UTC m=+0.175281759 container attach a75650f333d70731c26d3ad0e40a0fdde2f4cdf9cdea0cf48e71bbb10d8a2710 (image=quay.io/ceph/ceph:v18, name=jovial_almeida, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:23:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 21 23:23:03 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1164796346' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 23:23:03 compute-0 jovial_almeida[74836]: 
Jan 21 23:23:03 compute-0 jovial_almeida[74836]: {
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     "fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     "health": {
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "status": "HEALTH_OK",
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "checks": {},
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "mutes": []
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     },
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     "election_epoch": 5,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     "quorum": [
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         0
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     ],
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     "quorum_names": [
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "compute-0"
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     ],
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     "quorum_age": 11,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     "monmap": {
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "epoch": 1,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "min_mon_release_name": "reef",
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "num_mons": 1
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     },
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     "osdmap": {
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "epoch": 1,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "num_osds": 0,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "num_up_osds": 0,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "osd_up_since": 0,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "num_in_osds": 0,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "osd_in_since": 0,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "num_remapped_pgs": 0
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     },
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     "pgmap": {
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "pgs_by_state": [],
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "num_pgs": 0,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "num_pools": 0,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "num_objects": 0,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "data_bytes": 0,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "bytes_used": 0,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "bytes_avail": 0,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "bytes_total": 0
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     },
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     "fsmap": {
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "epoch": 1,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "by_rank": [],
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "up:standby": 0
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     },
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     "mgrmap": {
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "available": false,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "num_standbys": 0,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "modules": [
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:             "iostat",
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:             "nfs",
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:             "restful"
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         ],
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "services": {}
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     },
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     "servicemap": {
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "epoch": 1,
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "modified": "2026-01-21T23:22:49.246100+0000",
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:         "services": {}
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     },
Jan 21 23:23:03 compute-0 jovial_almeida[74836]:     "progress_events": {}
Jan 21 23:23:03 compute-0 jovial_almeida[74836]: }
Jan 21 23:23:03 compute-0 systemd[1]: libpod-a75650f333d70731c26d3ad0e40a0fdde2f4cdf9cdea0cf48e71bbb10d8a2710.scope: Deactivated successfully.
Jan 21 23:23:03 compute-0 podman[74820]: 2026-01-21 23:23:03.290795312 +0000 UTC m=+0.565472217 container died a75650f333d70731c26d3ad0e40a0fdde2f4cdf9cdea0cf48e71bbb10d8a2710 (image=quay.io/ceph/ceph:v18, name=jovial_almeida, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:23:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-69a9db141fcf6b3cc2336428655fa21f82feb238eddce15ca145b7aeb4726e35-merged.mount: Deactivated successfully.
Jan 21 23:23:03 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1164796346' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 23:23:03 compute-0 podman[74820]: 2026-01-21 23:23:03.348423431 +0000 UTC m=+0.623100336 container remove a75650f333d70731c26d3ad0e40a0fdde2f4cdf9cdea0cf48e71bbb10d8a2710 (image=quay.io/ceph/ceph:v18, name=jovial_almeida, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 21 23:23:03 compute-0 systemd[1]: libpod-conmon-a75650f333d70731c26d3ad0e40a0fdde2f4cdf9cdea0cf48e71bbb10d8a2710.scope: Deactivated successfully.
Jan 21 23:23:03 compute-0 ceph-mgr[74614]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 21 23:23:03 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'orchestrator'
Jan 21 23:23:03 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:03.402+0000 7fb93ccc0140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 21 23:23:04 compute-0 ceph-mgr[74614]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 21 23:23:04 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:04.076+0000 7fb93ccc0140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 21 23:23:04 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'osd_perf_query'
Jan 21 23:23:04 compute-0 ceph-mgr[74614]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 21 23:23:04 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'osd_support'
Jan 21 23:23:04 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:04.337+0000 7fb93ccc0140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 21 23:23:04 compute-0 ceph-mgr[74614]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 21 23:23:04 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'pg_autoscaler'
Jan 21 23:23:04 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:04.577+0000 7fb93ccc0140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 21 23:23:04 compute-0 ceph-mgr[74614]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 21 23:23:04 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'progress'
Jan 21 23:23:04 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:04.850+0000 7fb93ccc0140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 21 23:23:05 compute-0 ceph-mgr[74614]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 21 23:23:05 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'prometheus'
Jan 21 23:23:05 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:05.099+0000 7fb93ccc0140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 21 23:23:05 compute-0 podman[74875]: 2026-01-21 23:23:05.41907798 +0000 UTC m=+0.044473236 container create 73dd85e46cb2ddcd44707713b825c87262169ed8eec3b5dd28dc57af0cab0116 (image=quay.io/ceph/ceph:v18, name=quirky_shtern, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:23:05 compute-0 systemd[1]: Started libpod-conmon-73dd85e46cb2ddcd44707713b825c87262169ed8eec3b5dd28dc57af0cab0116.scope.
Jan 21 23:23:05 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df6dba92cba9e700049d95b92e9a36e90c308f8894d0ebfd420e4fd156e36a9d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df6dba92cba9e700049d95b92e9a36e90c308f8894d0ebfd420e4fd156e36a9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df6dba92cba9e700049d95b92e9a36e90c308f8894d0ebfd420e4fd156e36a9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:05 compute-0 podman[74875]: 2026-01-21 23:23:05.397010403 +0000 UTC m=+0.022405679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:05 compute-0 podman[74875]: 2026-01-21 23:23:05.518119018 +0000 UTC m=+0.143514294 container init 73dd85e46cb2ddcd44707713b825c87262169ed8eec3b5dd28dc57af0cab0116 (image=quay.io/ceph/ceph:v18, name=quirky_shtern, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 21 23:23:05 compute-0 podman[74875]: 2026-01-21 23:23:05.524064441 +0000 UTC m=+0.149459737 container start 73dd85e46cb2ddcd44707713b825c87262169ed8eec3b5dd28dc57af0cab0116 (image=quay.io/ceph/ceph:v18, name=quirky_shtern, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:23:05 compute-0 podman[74875]: 2026-01-21 23:23:05.527821385 +0000 UTC m=+0.153216681 container attach 73dd85e46cb2ddcd44707713b825c87262169ed8eec3b5dd28dc57af0cab0116 (image=quay.io/ceph/ceph:v18, name=quirky_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 23:23:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 21 23:23:05 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3837365657' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 23:23:05 compute-0 quirky_shtern[74891]: 
Jan 21 23:23:05 compute-0 quirky_shtern[74891]: {
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     "fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     "health": {
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "status": "HEALTH_OK",
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "checks": {},
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "mutes": []
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     },
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     "election_epoch": 5,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     "quorum": [
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         0
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     ],
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     "quorum_names": [
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "compute-0"
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     ],
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     "quorum_age": 13,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     "monmap": {
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "epoch": 1,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "min_mon_release_name": "reef",
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "num_mons": 1
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     },
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     "osdmap": {
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "epoch": 1,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "num_osds": 0,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "num_up_osds": 0,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "osd_up_since": 0,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "num_in_osds": 0,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "osd_in_since": 0,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "num_remapped_pgs": 0
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     },
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     "pgmap": {
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "pgs_by_state": [],
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "num_pgs": 0,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "num_pools": 0,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "num_objects": 0,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "data_bytes": 0,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "bytes_used": 0,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "bytes_avail": 0,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "bytes_total": 0
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     },
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     "fsmap": {
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "epoch": 1,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "by_rank": [],
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "up:standby": 0
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     },
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     "mgrmap": {
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "available": false,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "num_standbys": 0,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "modules": [
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:             "iostat",
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:             "nfs",
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:             "restful"
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         ],
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "services": {}
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     },
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     "servicemap": {
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "epoch": 1,
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "modified": "2026-01-21T23:22:49.246100+0000",
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:         "services": {}
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     },
Jan 21 23:23:05 compute-0 quirky_shtern[74891]:     "progress_events": {}
Jan 21 23:23:05 compute-0 quirky_shtern[74891]: }
Jan 21 23:23:05 compute-0 systemd[1]: libpod-73dd85e46cb2ddcd44707713b825c87262169ed8eec3b5dd28dc57af0cab0116.scope: Deactivated successfully.
Jan 21 23:23:05 compute-0 podman[74875]: 2026-01-21 23:23:05.911572237 +0000 UTC m=+0.536967493 container died 73dd85e46cb2ddcd44707713b825c87262169ed8eec3b5dd28dc57af0cab0116 (image=quay.io/ceph/ceph:v18, name=quirky_shtern, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:23:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-df6dba92cba9e700049d95b92e9a36e90c308f8894d0ebfd420e4fd156e36a9d-merged.mount: Deactivated successfully.
Jan 21 23:23:05 compute-0 podman[74875]: 2026-01-21 23:23:05.948632654 +0000 UTC m=+0.574027920 container remove 73dd85e46cb2ddcd44707713b825c87262169ed8eec3b5dd28dc57af0cab0116 (image=quay.io/ceph/ceph:v18, name=quirky_shtern, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 21 23:23:05 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3837365657' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 23:23:05 compute-0 systemd[1]: libpod-conmon-73dd85e46cb2ddcd44707713b825c87262169ed8eec3b5dd28dc57af0cab0116.scope: Deactivated successfully.
Jan 21 23:23:06 compute-0 ceph-mgr[74614]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 21 23:23:06 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'rbd_support'
Jan 21 23:23:06 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:06.117+0000 7fb93ccc0140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 21 23:23:06 compute-0 ceph-mgr[74614]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 21 23:23:06 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'restful'
Jan 21 23:23:06 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:06.420+0000 7fb93ccc0140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 21 23:23:07 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'rgw'
Jan 21 23:23:07 compute-0 ceph-mgr[74614]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 21 23:23:07 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'rook'
Jan 21 23:23:07 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:07.784+0000 7fb93ccc0140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 21 23:23:08 compute-0 podman[74928]: 2026-01-21 23:23:08.039631158 +0000 UTC m=+0.064311404 container create 606c42384704340b53d3d4ca8e26121a24ebb9e005bdae175bf7ee3f08425f4b (image=quay.io/ceph/ceph:v18, name=adoring_gates, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 21 23:23:08 compute-0 systemd[1]: Started libpod-conmon-606c42384704340b53d3d4ca8e26121a24ebb9e005bdae175bf7ee3f08425f4b.scope.
Jan 21 23:23:08 compute-0 podman[74928]: 2026-01-21 23:23:08.012854807 +0000 UTC m=+0.037535103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:08 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b7d7143df476cee840592636a85e2a6573f61206d52866871535f0fec3ab85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b7d7143df476cee840592636a85e2a6573f61206d52866871535f0fec3ab85/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b7d7143df476cee840592636a85e2a6573f61206d52866871535f0fec3ab85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:08 compute-0 podman[74928]: 2026-01-21 23:23:08.139054758 +0000 UTC m=+0.163735054 container init 606c42384704340b53d3d4ca8e26121a24ebb9e005bdae175bf7ee3f08425f4b (image=quay.io/ceph/ceph:v18, name=adoring_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:23:08 compute-0 podman[74928]: 2026-01-21 23:23:08.149925601 +0000 UTC m=+0.174605827 container start 606c42384704340b53d3d4ca8e26121a24ebb9e005bdae175bf7ee3f08425f4b (image=quay.io/ceph/ceph:v18, name=adoring_gates, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:23:08 compute-0 podman[74928]: 2026-01-21 23:23:08.153611094 +0000 UTC m=+0.178291410 container attach 606c42384704340b53d3d4ca8e26121a24ebb9e005bdae175bf7ee3f08425f4b (image=quay.io/ceph/ceph:v18, name=adoring_gates, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 21 23:23:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 21 23:23:08 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2724488859' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 23:23:08 compute-0 adoring_gates[74944]: 
Jan 21 23:23:08 compute-0 adoring_gates[74944]: {
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     "fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     "health": {
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "status": "HEALTH_OK",
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "checks": {},
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "mutes": []
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     },
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     "election_epoch": 5,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     "quorum": [
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         0
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     ],
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     "quorum_names": [
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "compute-0"
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     ],
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     "quorum_age": 16,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     "monmap": {
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "epoch": 1,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "min_mon_release_name": "reef",
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "num_mons": 1
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     },
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     "osdmap": {
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "epoch": 1,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "num_osds": 0,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "num_up_osds": 0,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "osd_up_since": 0,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "num_in_osds": 0,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "osd_in_since": 0,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "num_remapped_pgs": 0
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     },
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     "pgmap": {
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "pgs_by_state": [],
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "num_pgs": 0,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "num_pools": 0,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "num_objects": 0,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "data_bytes": 0,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "bytes_used": 0,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "bytes_avail": 0,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "bytes_total": 0
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     },
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     "fsmap": {
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "epoch": 1,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "by_rank": [],
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "up:standby": 0
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     },
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     "mgrmap": {
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "available": false,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "num_standbys": 0,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "modules": [
Jan 21 23:23:08 compute-0 adoring_gates[74944]:             "iostat",
Jan 21 23:23:08 compute-0 adoring_gates[74944]:             "nfs",
Jan 21 23:23:08 compute-0 adoring_gates[74944]:             "restful"
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         ],
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "services": {}
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     },
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     "servicemap": {
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "epoch": 1,
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "modified": "2026-01-21T23:22:49.246100+0000",
Jan 21 23:23:08 compute-0 adoring_gates[74944]:         "services": {}
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     },
Jan 21 23:23:08 compute-0 adoring_gates[74944]:     "progress_events": {}
Jan 21 23:23:08 compute-0 adoring_gates[74944]: }
Jan 21 23:23:08 compute-0 systemd[1]: libpod-606c42384704340b53d3d4ca8e26121a24ebb9e005bdae175bf7ee3f08425f4b.scope: Deactivated successfully.
Jan 21 23:23:08 compute-0 podman[74928]: 2026-01-21 23:23:08.55510715 +0000 UTC m=+0.579787366 container died 606c42384704340b53d3d4ca8e26121a24ebb9e005bdae175bf7ee3f08425f4b (image=quay.io/ceph/ceph:v18, name=adoring_gates, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 23:23:08 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2724488859' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 23:23:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4b7d7143df476cee840592636a85e2a6573f61206d52866871535f0fec3ab85-merged.mount: Deactivated successfully.
Jan 21 23:23:08 compute-0 podman[74928]: 2026-01-21 23:23:08.61701959 +0000 UTC m=+0.641699816 container remove 606c42384704340b53d3d4ca8e26121a24ebb9e005bdae175bf7ee3f08425f4b (image=quay.io/ceph/ceph:v18, name=adoring_gates, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:23:08 compute-0 systemd[1]: libpod-conmon-606c42384704340b53d3d4ca8e26121a24ebb9e005bdae175bf7ee3f08425f4b.scope: Deactivated successfully.
Jan 21 23:23:09 compute-0 ceph-mgr[74614]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 21 23:23:09 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'selftest'
Jan 21 23:23:09 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:09.895+0000 7fb93ccc0140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 21 23:23:10 compute-0 ceph-mgr[74614]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 21 23:23:10 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'snap_schedule'
Jan 21 23:23:10 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:10.130+0000 7fb93ccc0140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 21 23:23:10 compute-0 ceph-mgr[74614]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 21 23:23:10 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'stats'
Jan 21 23:23:10 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:10.364+0000 7fb93ccc0140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 21 23:23:10 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'status'
Jan 21 23:23:10 compute-0 podman[74983]: 2026-01-21 23:23:10.700178423 +0000 UTC m=+0.045309711 container create abe02678c9403488a89ea61c54b9ea727212a9034d00d77e7b9b3f90ae936f9c (image=quay.io/ceph/ceph:v18, name=unruffled_leavitt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Jan 21 23:23:10 compute-0 systemd[1]: Started libpod-conmon-abe02678c9403488a89ea61c54b9ea727212a9034d00d77e7b9b3f90ae936f9c.scope.
Jan 21 23:23:10 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61eb53be7aa2ccdea86834cf9343e82991d8fb635e163a0da03cb69c6f931128/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61eb53be7aa2ccdea86834cf9343e82991d8fb635e163a0da03cb69c6f931128/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61eb53be7aa2ccdea86834cf9343e82991d8fb635e163a0da03cb69c6f931128/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:10 compute-0 podman[74983]: 2026-01-21 23:23:10.681929152 +0000 UTC m=+0.027060470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:10 compute-0 podman[74983]: 2026-01-21 23:23:10.788218463 +0000 UTC m=+0.133349801 container init abe02678c9403488a89ea61c54b9ea727212a9034d00d77e7b9b3f90ae936f9c (image=quay.io/ceph/ceph:v18, name=unruffled_leavitt, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:23:10 compute-0 podman[74983]: 2026-01-21 23:23:10.794522557 +0000 UTC m=+0.139653835 container start abe02678c9403488a89ea61c54b9ea727212a9034d00d77e7b9b3f90ae936f9c (image=quay.io/ceph/ceph:v18, name=unruffled_leavitt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Jan 21 23:23:10 compute-0 podman[74983]: 2026-01-21 23:23:10.798398686 +0000 UTC m=+0.143530014 container attach abe02678c9403488a89ea61c54b9ea727212a9034d00d77e7b9b3f90ae936f9c (image=quay.io/ceph/ceph:v18, name=unruffled_leavitt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 23:23:10 compute-0 ceph-mgr[74614]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 21 23:23:10 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'telegraf'
Jan 21 23:23:10 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:10.843+0000 7fb93ccc0140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 21 23:23:11 compute-0 ceph-mgr[74614]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 21 23:23:11 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'telemetry'
Jan 21 23:23:11 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:11.082+0000 7fb93ccc0140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 21 23:23:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 21 23:23:11 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2574248473' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]: 
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]: {
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     "fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     "health": {
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "status": "HEALTH_OK",
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "checks": {},
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "mutes": []
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     },
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     "election_epoch": 5,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     "quorum": [
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         0
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     ],
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     "quorum_names": [
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "compute-0"
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     ],
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     "quorum_age": 19,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     "monmap": {
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "epoch": 1,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "min_mon_release_name": "reef",
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "num_mons": 1
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     },
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     "osdmap": {
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "epoch": 1,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "num_osds": 0,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "num_up_osds": 0,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "osd_up_since": 0,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "num_in_osds": 0,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "osd_in_since": 0,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "num_remapped_pgs": 0
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     },
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     "pgmap": {
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "pgs_by_state": [],
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "num_pgs": 0,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "num_pools": 0,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "num_objects": 0,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "data_bytes": 0,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "bytes_used": 0,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "bytes_avail": 0,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "bytes_total": 0
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     },
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     "fsmap": {
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "epoch": 1,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "by_rank": [],
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "up:standby": 0
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     },
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     "mgrmap": {
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "available": false,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "num_standbys": 0,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "modules": [
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:             "iostat",
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:             "nfs",
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:             "restful"
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         ],
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "services": {}
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     },
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     "servicemap": {
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "epoch": 1,
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "modified": "2026-01-21T23:22:49.246100+0000",
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:         "services": {}
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     },
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]:     "progress_events": {}
Jan 21 23:23:11 compute-0 unruffled_leavitt[74999]: }
Jan 21 23:23:11 compute-0 systemd[1]: libpod-abe02678c9403488a89ea61c54b9ea727212a9034d00d77e7b9b3f90ae936f9c.scope: Deactivated successfully.
Jan 21 23:23:11 compute-0 podman[74983]: 2026-01-21 23:23:11.199834992 +0000 UTC m=+0.544966320 container died abe02678c9403488a89ea61c54b9ea727212a9034d00d77e7b9b3f90ae936f9c (image=quay.io/ceph/ceph:v18, name=unruffled_leavitt, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 21 23:23:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-61eb53be7aa2ccdea86834cf9343e82991d8fb635e163a0da03cb69c6f931128-merged.mount: Deactivated successfully.
Jan 21 23:23:11 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2574248473' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 23:23:11 compute-0 podman[74983]: 2026-01-21 23:23:11.252904825 +0000 UTC m=+0.598036113 container remove abe02678c9403488a89ea61c54b9ea727212a9034d00d77e7b9b3f90ae936f9c (image=quay.io/ceph/ceph:v18, name=unruffled_leavitt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 23:23:11 compute-0 systemd[1]: libpod-conmon-abe02678c9403488a89ea61c54b9ea727212a9034d00d77e7b9b3f90ae936f9c.scope: Deactivated successfully.
Jan 21 23:23:11 compute-0 ceph-mgr[74614]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 21 23:23:11 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'test_orchestrator'
Jan 21 23:23:11 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:11.643+0000 7fb93ccc0140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 21 23:23:12 compute-0 ceph-mgr[74614]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 21 23:23:12 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'volumes'
Jan 21 23:23:12 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:12.271+0000 7fb93ccc0140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 21 23:23:12 compute-0 ceph-mgr[74614]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 21 23:23:12 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'zabbix'
Jan 21 23:23:12 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:12.947+0000 7fb93ccc0140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 21 23:23:13 compute-0 ceph-mgr[74614]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 21 23:23:13 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:13.171+0000 7fb93ccc0140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 21 23:23:13 compute-0 ceph-mgr[74614]: ms_deliver_dispatch: unhandled message 0x563ed67e0f20 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 21 23:23:13 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.boqcsl
Jan 21 23:23:13 compute-0 podman[75043]: 2026-01-21 23:23:13.322354392 +0000 UTC m=+0.038867379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: mgr handle_mgr_map Activating!
Jan 21 23:23:14 compute-0 podman[75043]: 2026-01-21 23:23:14.646346878 +0000 UTC m=+1.362859775 container create f5b91ee401a233bd3f08f6191fefa57a8e126cdf4da52babbe6ec2cb3a34f734 (image=quay.io/ceph/ceph:v18, name=hopeful_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: mgr handle_mgr_map I am now activating
Jan 21 23:23:14 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.boqcsl(active, starting, since 1.47234s)
Jan 21 23:23:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Jan 21 23:23:14 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 21 23:23:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e1 all = 1
Jan 21 23:23:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 21 23:23:14 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 21 23:23:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Jan 21 23:23:14 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 21 23:23:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 21 23:23:14 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 21 23:23:14 compute-0 ceph-mon[74318]: Activating manager daemon compute-0.boqcsl
Jan 21 23:23:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.boqcsl", "id": "compute-0.boqcsl"} v 0) v1
Jan 21 23:23:14 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mgr metadata", "who": "compute-0.boqcsl", "id": "compute-0.boqcsl"}]: dispatch
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: balancer
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [balancer INFO root] Starting
Jan 21 23:23:14 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : Manager daemon compute-0.boqcsl is now available
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: crash
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:23:14
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [balancer INFO root] No pools available
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: devicehealth
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [devicehealth INFO root] Starting
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: iostat
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: nfs
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:14 compute-0 systemd[1]: Started libpod-conmon-f5b91ee401a233bd3f08f6191fefa57a8e126cdf4da52babbe6ec2cb3a34f734.scope.
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: orchestrator
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: pg_autoscaler
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: progress
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [progress INFO root] Loading...
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [progress INFO root] No stored events to load
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [progress INFO root] Loaded [] historic events
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [progress INFO root] Loaded OSDMap, ready.
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:14 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0fe04e9301a355585f562f6b1134068d4ded78567ba0cb0aafb2fb1492c562/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0fe04e9301a355585f562f6b1134068d4ded78567ba0cb0aafb2fb1492c562/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0fe04e9301a355585f562f6b1134068d4ded78567ba0cb0aafb2fb1492c562/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [rbd_support INFO root] recovery thread starting
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [rbd_support INFO root] starting setup
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: rbd_support
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: restful
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [restful INFO root] server_addr: :: server_port: 8003
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: status
Jan 21 23:23:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.boqcsl/mirror_snapshot_schedule"} v 0) v1
Jan 21 23:23:14 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.boqcsl/mirror_snapshot_schedule"}]: dispatch
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [restful WARNING root] server not running: no certificate configured
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: telemetry
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 21 23:23:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [rbd_support INFO root] PerfHandler: starting
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TaskHandler: starting
Jan 21 23:23:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.boqcsl/trash_purge_schedule"} v 0) v1
Jan 21 23:23:14 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.boqcsl/trash_purge_schedule"}]: dispatch
Jan 21 23:23:14 compute-0 podman[75043]: 2026-01-21 23:23:14.714546044 +0000 UTC m=+1.431058921 container init f5b91ee401a233bd3f08f6191fefa57a8e126cdf4da52babbe6ec2cb3a34f734 (image=quay.io/ceph/ceph:v18, name=hopeful_villani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:23:14 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: [rbd_support INFO root] setup complete
Jan 21 23:23:14 compute-0 podman[75043]: 2026-01-21 23:23:14.721182887 +0000 UTC m=+1.437695784 container start f5b91ee401a233bd3f08f6191fefa57a8e126cdf4da52babbe6ec2cb3a34f734 (image=quay.io/ceph/ceph:v18, name=hopeful_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 23:23:14 compute-0 podman[75043]: 2026-01-21 23:23:14.724973592 +0000 UTC m=+1.441486469 container attach f5b91ee401a233bd3f08f6191fefa57a8e126cdf4da52babbe6ec2cb3a34f734 (image=quay.io/ceph/ceph:v18, name=hopeful_villani, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:23:14 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Jan 21 23:23:14 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: volumes
Jan 21 23:23:14 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 21 23:23:15 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1921389853' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 23:23:15 compute-0 hopeful_villani[75086]: 
Jan 21 23:23:15 compute-0 hopeful_villani[75086]: {
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     "fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     "health": {
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "status": "HEALTH_OK",
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "checks": {},
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "mutes": []
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     },
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     "election_epoch": 5,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     "quorum": [
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         0
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     ],
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     "quorum_names": [
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "compute-0"
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     ],
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     "quorum_age": 22,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     "monmap": {
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "epoch": 1,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "min_mon_release_name": "reef",
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "num_mons": 1
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     },
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     "osdmap": {
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "epoch": 1,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "num_osds": 0,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "num_up_osds": 0,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "osd_up_since": 0,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "num_in_osds": 0,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "osd_in_since": 0,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "num_remapped_pgs": 0
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     },
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     "pgmap": {
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "pgs_by_state": [],
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "num_pgs": 0,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "num_pools": 0,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "num_objects": 0,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "data_bytes": 0,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "bytes_used": 0,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "bytes_avail": 0,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "bytes_total": 0
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     },
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     "fsmap": {
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "epoch": 1,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "by_rank": [],
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "up:standby": 0
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     },
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     "mgrmap": {
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "available": false,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "num_standbys": 0,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "modules": [
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:             "iostat",
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:             "nfs",
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:             "restful"
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         ],
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "services": {}
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     },
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     "servicemap": {
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "epoch": 1,
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "modified": "2026-01-21T23:22:49.246100+0000",
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:         "services": {}
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     },
Jan 21 23:23:15 compute-0 hopeful_villani[75086]:     "progress_events": {}
Jan 21 23:23:15 compute-0 hopeful_villani[75086]: }
Jan 21 23:23:15 compute-0 systemd[1]: libpod-f5b91ee401a233bd3f08f6191fefa57a8e126cdf4da52babbe6ec2cb3a34f734.scope: Deactivated successfully.
Jan 21 23:23:15 compute-0 podman[75043]: 2026-01-21 23:23:15.138120234 +0000 UTC m=+1.854633171 container died f5b91ee401a233bd3f08f6191fefa57a8e126cdf4da52babbe6ec2cb3a34f734 (image=quay.io/ceph/ceph:v18, name=hopeful_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:23:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc0fe04e9301a355585f562f6b1134068d4ded78567ba0cb0aafb2fb1492c562-merged.mount: Deactivated successfully.
Jan 21 23:23:15 compute-0 podman[75043]: 2026-01-21 23:23:15.189520944 +0000 UTC m=+1.906033851 container remove f5b91ee401a233bd3f08f6191fefa57a8e126cdf4da52babbe6ec2cb3a34f734 (image=quay.io/ceph/ceph:v18, name=hopeful_villani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 21 23:23:15 compute-0 systemd[1]: libpod-conmon-f5b91ee401a233bd3f08f6191fefa57a8e126cdf4da52babbe6ec2cb3a34f734.scope: Deactivated successfully.
Jan 21 23:23:15 compute-0 ceph-mon[74318]: mgrmap e2: compute-0.boqcsl(active, starting, since 1.47234s)
Jan 21 23:23:15 compute-0 ceph-mon[74318]: from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 21 23:23:15 compute-0 ceph-mon[74318]: from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 21 23:23:15 compute-0 ceph-mon[74318]: from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 21 23:23:15 compute-0 ceph-mon[74318]: from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 21 23:23:15 compute-0 ceph-mon[74318]: from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mgr metadata", "who": "compute-0.boqcsl", "id": "compute-0.boqcsl"}]: dispatch
Jan 21 23:23:15 compute-0 ceph-mon[74318]: Manager daemon compute-0.boqcsl is now available
Jan 21 23:23:15 compute-0 ceph-mon[74318]: from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.boqcsl/mirror_snapshot_schedule"}]: dispatch
Jan 21 23:23:15 compute-0 ceph-mon[74318]: from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.boqcsl/trash_purge_schedule"}]: dispatch
Jan 21 23:23:15 compute-0 ceph-mon[74318]: from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:15 compute-0 ceph-mon[74318]: from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:15 compute-0 ceph-mon[74318]: from='mgr.14102 192.168.122.100:0/3038171995' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:15 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1921389853' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 23:23:15 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.boqcsl(active, since 2s)
Jan 21 23:23:16 compute-0 ceph-mgr[74614]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 23:23:16 compute-0 ceph-mon[74318]: mgrmap e3: compute-0.boqcsl(active, since 2s)
Jan 21 23:23:17 compute-0 podman[75178]: 2026-01-21 23:23:17.292443215 +0000 UTC m=+0.068763413 container create 26787bb791d402a071f4b320bfb995a992fc32331b9dc989c49c4faf11af8ee9 (image=quay.io/ceph/ceph:v18, name=beautiful_allen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 23:23:17 compute-0 systemd[1]: Started libpod-conmon-26787bb791d402a071f4b320bfb995a992fc32331b9dc989c49c4faf11af8ee9.scope.
Jan 21 23:23:17 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/730c9801a867e55c937160997802326bc5bb5f0a82c70d7da9b5c3d6f0ace1eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/730c9801a867e55c937160997802326bc5bb5f0a82c70d7da9b5c3d6f0ace1eb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/730c9801a867e55c937160997802326bc5bb5f0a82c70d7da9b5c3d6f0ace1eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:17 compute-0 podman[75178]: 2026-01-21 23:23:17.265165821 +0000 UTC m=+0.041486059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:17 compute-0 podman[75178]: 2026-01-21 23:23:17.371527113 +0000 UTC m=+0.147847301 container init 26787bb791d402a071f4b320bfb995a992fc32331b9dc989c49c4faf11af8ee9 (image=quay.io/ceph/ceph:v18, name=beautiful_allen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:23:17 compute-0 podman[75178]: 2026-01-21 23:23:17.376949969 +0000 UTC m=+0.153270127 container start 26787bb791d402a071f4b320bfb995a992fc32331b9dc989c49c4faf11af8ee9 (image=quay.io/ceph/ceph:v18, name=beautiful_allen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:23:17 compute-0 podman[75178]: 2026-01-21 23:23:17.380498057 +0000 UTC m=+0.156818225 container attach 26787bb791d402a071f4b320bfb995a992fc32331b9dc989c49c4faf11af8ee9 (image=quay.io/ceph/ceph:v18, name=beautiful_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 23:23:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 21 23:23:17 compute-0 beautiful_allen[75195]: 
Jan 21 23:23:17 compute-0 beautiful_allen[75195]: {
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     "fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     "health": {
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "status": "HEALTH_OK",
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "checks": {},
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "mutes": []
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     },
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     "election_epoch": 5,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     "quorum": [
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         0
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     ],
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     "quorum_names": [
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "compute-0"
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     ],
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     "quorum_age": 25,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     "monmap": {
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "epoch": 1,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "min_mon_release_name": "reef",
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "num_mons": 1
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     },
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     "osdmap": {
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "epoch": 1,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "num_osds": 0,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "num_up_osds": 0,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "osd_up_since": 0,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "num_in_osds": 0,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "osd_in_since": 0,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "num_remapped_pgs": 0
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     },
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     "pgmap": {
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "pgs_by_state": [],
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "num_pgs": 0,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "num_pools": 0,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "num_objects": 0,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "data_bytes": 0,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "bytes_used": 0,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "bytes_avail": 0,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "bytes_total": 0
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     },
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     "fsmap": {
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "epoch": 1,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "by_rank": [],
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "up:standby": 0
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     },
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     "mgrmap": {
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "available": true,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "num_standbys": 0,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "modules": [
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:             "iostat",
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:             "nfs",
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:             "restful"
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         ],
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "services": {}
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     },
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     "servicemap": {
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "epoch": 1,
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "modified": "2026-01-21T23:22:49.246100+0000",
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:         "services": {}
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     },
Jan 21 23:23:17 compute-0 beautiful_allen[75195]:     "progress_events": {}
Jan 21 23:23:17 compute-0 beautiful_allen[75195]: }
Jan 21 23:23:17 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3722571513' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 23:23:17 compute-0 systemd[1]: libpod-26787bb791d402a071f4b320bfb995a992fc32331b9dc989c49c4faf11af8ee9.scope: Deactivated successfully.
Jan 21 23:23:18 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3722571513' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 21 23:23:18 compute-0 podman[75221]: 2026-01-21 23:23:18.022928857 +0000 UTC m=+0.034073323 container died 26787bb791d402a071f4b320bfb995a992fc32331b9dc989c49c4faf11af8ee9 (image=quay.io/ceph/ceph:v18, name=beautiful_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:23:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-730c9801a867e55c937160997802326bc5bb5f0a82c70d7da9b5c3d6f0ace1eb-merged.mount: Deactivated successfully.
Jan 21 23:23:18 compute-0 podman[75221]: 2026-01-21 23:23:18.06195206 +0000 UTC m=+0.073096476 container remove 26787bb791d402a071f4b320bfb995a992fc32331b9dc989c49c4faf11af8ee9 (image=quay.io/ceph/ceph:v18, name=beautiful_allen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 23:23:18 compute-0 systemd[1]: libpod-conmon-26787bb791d402a071f4b320bfb995a992fc32331b9dc989c49c4faf11af8ee9.scope: Deactivated successfully.
Jan 21 23:23:18 compute-0 podman[75236]: 2026-01-21 23:23:18.163435413 +0000 UTC m=+0.065145423 container create 8a689b5959946db75b3bc842ac8a3152a8d39c5818c811fb3be9bfa9bb7bc435 (image=quay.io/ceph/ceph:v18, name=frosty_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 23:23:18 compute-0 systemd[1]: Started libpod-conmon-8a689b5959946db75b3bc842ac8a3152a8d39c5818c811fb3be9bfa9bb7bc435.scope.
Jan 21 23:23:18 compute-0 podman[75236]: 2026-01-21 23:23:18.137231491 +0000 UTC m=+0.038941551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:18 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e160a813ec926aa6f89106a36503e3702c3877e1412452f08944c599d12cea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e160a813ec926aa6f89106a36503e3702c3877e1412452f08944c599d12cea/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e160a813ec926aa6f89106a36503e3702c3877e1412452f08944c599d12cea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e160a813ec926aa6f89106a36503e3702c3877e1412452f08944c599d12cea/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:18 compute-0 podman[75236]: 2026-01-21 23:23:18.261240903 +0000 UTC m=+0.162950893 container init 8a689b5959946db75b3bc842ac8a3152a8d39c5818c811fb3be9bfa9bb7bc435 (image=quay.io/ceph/ceph:v18, name=frosty_rhodes, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:23:18 compute-0 podman[75236]: 2026-01-21 23:23:18.267451403 +0000 UTC m=+0.169161373 container start 8a689b5959946db75b3bc842ac8a3152a8d39c5818c811fb3be9bfa9bb7bc435 (image=quay.io/ceph/ceph:v18, name=frosty_rhodes, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:23:18 compute-0 podman[75236]: 2026-01-21 23:23:18.270512497 +0000 UTC m=+0.172222467 container attach 8a689b5959946db75b3bc842ac8a3152a8d39c5818c811fb3be9bfa9bb7bc435 (image=quay.io/ceph/ceph:v18, name=frosty_rhodes, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:23:18 compute-0 ceph-mgr[74614]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 23:23:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 21 23:23:18 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/101638893' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 21 23:23:18 compute-0 systemd[1]: libpod-8a689b5959946db75b3bc842ac8a3152a8d39c5818c811fb3be9bfa9bb7bc435.scope: Deactivated successfully.
Jan 21 23:23:18 compute-0 podman[75236]: 2026-01-21 23:23:18.876270825 +0000 UTC m=+0.777980795 container died 8a689b5959946db75b3bc842ac8a3152a8d39c5818c811fb3be9bfa9bb7bc435 (image=quay.io/ceph/ceph:v18, name=frosty_rhodes, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:23:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0e160a813ec926aa6f89106a36503e3702c3877e1412452f08944c599d12cea-merged.mount: Deactivated successfully.
Jan 21 23:23:18 compute-0 podman[75236]: 2026-01-21 23:23:18.919879819 +0000 UTC m=+0.821589779 container remove 8a689b5959946db75b3bc842ac8a3152a8d39c5818c811fb3be9bfa9bb7bc435 (image=quay.io/ceph/ceph:v18, name=frosty_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 23:23:18 compute-0 systemd[1]: libpod-conmon-8a689b5959946db75b3bc842ac8a3152a8d39c5818c811fb3be9bfa9bb7bc435.scope: Deactivated successfully.
Jan 21 23:23:18 compute-0 podman[75290]: 2026-01-21 23:23:18.989373613 +0000 UTC m=+0.050767603 container create 455738038a710ecc483ed5c75e44230ea52c6d5c64df03b1c611467b772aa55c (image=quay.io/ceph/ceph:v18, name=condescending_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:23:19 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/101638893' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 21 23:23:19 compute-0 systemd[1]: Started libpod-conmon-455738038a710ecc483ed5c75e44230ea52c6d5c64df03b1c611467b772aa55c.scope.
Jan 21 23:23:19 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/925b269af59008d112a5735aefefd83a7f962bad668efe03029408baea8e7d15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/925b269af59008d112a5735aefefd83a7f962bad668efe03029408baea8e7d15/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/925b269af59008d112a5735aefefd83a7f962bad668efe03029408baea8e7d15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:19 compute-0 podman[75290]: 2026-01-21 23:23:18.961225863 +0000 UTC m=+0.022619883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:19 compute-0 podman[75290]: 2026-01-21 23:23:19.05989489 +0000 UTC m=+0.121288910 container init 455738038a710ecc483ed5c75e44230ea52c6d5c64df03b1c611467b772aa55c (image=quay.io/ceph/ceph:v18, name=condescending_feynman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 23:23:19 compute-0 podman[75290]: 2026-01-21 23:23:19.066401298 +0000 UTC m=+0.127795288 container start 455738038a710ecc483ed5c75e44230ea52c6d5c64df03b1c611467b772aa55c (image=quay.io/ceph/ceph:v18, name=condescending_feynman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:23:19 compute-0 podman[75290]: 2026-01-21 23:23:19.069968127 +0000 UTC m=+0.131362137 container attach 455738038a710ecc483ed5c75e44230ea52c6d5c64df03b1c611467b772aa55c (image=quay.io/ceph/ceph:v18, name=condescending_feynman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 21 23:23:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Jan 21 23:23:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/162907931' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 21 23:23:20 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/162907931' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 21 23:23:20 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/162907931' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr respawn  1: '-n'
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr respawn  2: 'mgr.compute-0.boqcsl'
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr respawn  3: '-f'
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr respawn  4: '--setuser'
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr respawn  5: 'ceph'
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr respawn  6: '--setgroup'
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr respawn  7: 'ceph'
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr respawn  8: '--default-log-to-file=false'
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr respawn  9: '--default-log-to-journald=true'
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr respawn  exe_path /proc/self/exe
Jan 21 23:23:20 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.boqcsl(active, since 6s)
Jan 21 23:23:20 compute-0 systemd[1]: libpod-455738038a710ecc483ed5c75e44230ea52c6d5c64df03b1c611467b772aa55c.scope: Deactivated successfully.
Jan 21 23:23:20 compute-0 conmon[75306]: conmon 455738038a710ecc483e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-455738038a710ecc483ed5c75e44230ea52c6d5c64df03b1c611467b772aa55c.scope/container/memory.events
Jan 21 23:23:20 compute-0 podman[75290]: 2026-01-21 23:23:20.057029814 +0000 UTC m=+1.118423814 container died 455738038a710ecc483ed5c75e44230ea52c6d5c64df03b1c611467b772aa55c (image=quay.io/ceph/ceph:v18, name=condescending_feynman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:23:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-925b269af59008d112a5735aefefd83a7f962bad668efe03029408baea8e7d15-merged.mount: Deactivated successfully.
Jan 21 23:23:20 compute-0 podman[75290]: 2026-01-21 23:23:20.101084191 +0000 UTC m=+1.162478221 container remove 455738038a710ecc483ed5c75e44230ea52c6d5c64df03b1c611467b772aa55c (image=quay.io/ceph/ceph:v18, name=condescending_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:23:20 compute-0 systemd[1]: libpod-conmon-455738038a710ecc483ed5c75e44230ea52c6d5c64df03b1c611467b772aa55c.scope: Deactivated successfully.
Jan 21 23:23:20 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: ignoring --setuser ceph since I am not root
Jan 21 23:23:20 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: ignoring --setgroup ceph since I am not root
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: pidfile_write: ignore empty --pid-file
Jan 21 23:23:20 compute-0 podman[75344]: 2026-01-21 23:23:20.159703293 +0000 UTC m=+0.040056876 container create e62d0fa75595cd3d38e58b55f999e674049d981aa424033cf27f697d66d47058 (image=quay.io/ceph/ceph:v18, name=reverent_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 21 23:23:20 compute-0 systemd[1]: Started libpod-conmon-e62d0fa75595cd3d38e58b55f999e674049d981aa424033cf27f697d66d47058.scope.
Jan 21 23:23:20 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8099759df17a86abdfbf688029cca5925df95b6faf25b3e1b770d2282a970562/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8099759df17a86abdfbf688029cca5925df95b6faf25b3e1b770d2282a970562/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8099759df17a86abdfbf688029cca5925df95b6faf25b3e1b770d2282a970562/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:20 compute-0 podman[75344]: 2026-01-21 23:23:20.238783031 +0000 UTC m=+0.119136634 container init e62d0fa75595cd3d38e58b55f999e674049d981aa424033cf27f697d66d47058 (image=quay.io/ceph/ceph:v18, name=reverent_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 21 23:23:20 compute-0 podman[75344]: 2026-01-21 23:23:20.141579518 +0000 UTC m=+0.021933131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:20 compute-0 podman[75344]: 2026-01-21 23:23:20.247638792 +0000 UTC m=+0.127992415 container start e62d0fa75595cd3d38e58b55f999e674049d981aa424033cf27f697d66d47058 (image=quay.io/ceph/ceph:v18, name=reverent_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:23:20 compute-0 podman[75344]: 2026-01-21 23:23:20.251383085 +0000 UTC m=+0.131736688 container attach e62d0fa75595cd3d38e58b55f999e674049d981aa424033cf27f697d66d47058 (image=quay.io/ceph/ceph:v18, name=reverent_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'alerts'
Jan 21 23:23:20 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:20.576+0000 7fbfb606f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'balancer'
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 21 23:23:20 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'cephadm'
Jan 21 23:23:20 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:20.808+0000 7fbfb606f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 21 23:23:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 21 23:23:20 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1223489521' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 21 23:23:20 compute-0 reverent_mclean[75385]: {
Jan 21 23:23:20 compute-0 reverent_mclean[75385]:     "epoch": 4,
Jan 21 23:23:20 compute-0 reverent_mclean[75385]:     "available": true,
Jan 21 23:23:20 compute-0 reverent_mclean[75385]:     "active_name": "compute-0.boqcsl",
Jan 21 23:23:20 compute-0 reverent_mclean[75385]:     "num_standby": 0
Jan 21 23:23:20 compute-0 reverent_mclean[75385]: }
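The JSON printed by the short-lived reverent_mclean container is the response to the 'mgr stat' dispatch logged just above; these one-shot ceph:v18 containers are presumably how the bootstrap shells out to the ceph CLI. Run directly, the same query looks like:

    ceph mgr stat
    # {
    #     "epoch": 4,
    #     "available": true,
    #     "active_name": "compute-0.boqcsl",
    #     "num_standby": 0
    # }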
Jan 21 23:23:20 compute-0 systemd[1]: libpod-e62d0fa75595cd3d38e58b55f999e674049d981aa424033cf27f697d66d47058.scope: Deactivated successfully.
Jan 21 23:23:20 compute-0 conmon[75385]: conmon e62d0fa75595cd3d38e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e62d0fa75595cd3d38e58b55f999e674049d981aa424033cf27f697d66d47058.scope/container/memory.events
Jan 21 23:23:20 compute-0 podman[75344]: 2026-01-21 23:23:20.82948985 +0000 UTC m=+0.709843473 container died e62d0fa75595cd3d38e58b55f999e674049d981aa424033cf27f697d66d47058 (image=quay.io/ceph/ceph:v18, name=reverent_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 21 23:23:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-8099759df17a86abdfbf688029cca5925df95b6faf25b3e1b770d2282a970562-merged.mount: Deactivated successfully.
Jan 21 23:23:20 compute-0 podman[75344]: 2026-01-21 23:23:20.885876894 +0000 UTC m=+0.766230517 container remove e62d0fa75595cd3d38e58b55f999e674049d981aa424033cf27f697d66d47058 (image=quay.io/ceph/ceph:v18, name=reverent_mclean, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 21 23:23:20 compute-0 systemd[1]: libpod-conmon-e62d0fa75595cd3d38e58b55f999e674049d981aa424033cf27f697d66d47058.scope: Deactivated successfully.
Jan 21 23:23:20 compute-0 podman[75423]: 2026-01-21 23:23:20.969212441 +0000 UTC m=+0.057274402 container create afc040bc022e3dc0b82fb19235869bb965488565671b26ff595bcd724d99ed96 (image=quay.io/ceph/ceph:v18, name=elated_neumann, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 23:23:21 compute-0 systemd[1]: Started libpod-conmon-afc040bc022e3dc0b82fb19235869bb965488565671b26ff595bcd724d99ed96.scope.
Jan 21 23:23:21 compute-0 podman[75423]: 2026-01-21 23:23:20.942230596 +0000 UTC m=+0.030292607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/162907931' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 21 23:23:21 compute-0 ceph-mon[74318]: mgrmap e4: compute-0.boqcsl(active, since 6s)
Jan 21 23:23:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1223489521' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 21 23:23:21 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339c93275a6a853206ccdccbfa2a06cb125f28075b38df6681c0d3ca07e9194b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339c93275a6a853206ccdccbfa2a06cb125f28075b38df6681c0d3ca07e9194b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339c93275a6a853206ccdccbfa2a06cb125f28075b38df6681c0d3ca07e9194b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:21 compute-0 podman[75423]: 2026-01-21 23:23:21.070005563 +0000 UTC m=+0.158067554 container init afc040bc022e3dc0b82fb19235869bb965488565671b26ff595bcd724d99ed96 (image=quay.io/ceph/ceph:v18, name=elated_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 21 23:23:21 compute-0 podman[75423]: 2026-01-21 23:23:21.074423827 +0000 UTC m=+0.162485798 container start afc040bc022e3dc0b82fb19235869bb965488565671b26ff595bcd724d99ed96 (image=quay.io/ceph/ceph:v18, name=elated_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:23:21 compute-0 podman[75423]: 2026-01-21 23:23:21.134316459 +0000 UTC m=+0.222378470 container attach afc040bc022e3dc0b82fb19235869bb965488565671b26ff595bcd724d99ed96 (image=quay.io/ceph/ceph:v18, name=elated_neumann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:23:22 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'crash'
Jan 21 23:23:23 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:23.019+0000 7fbfb606f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 21 23:23:23 compute-0 ceph-mgr[74614]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 21 23:23:23 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'dashboard'
Jan 21 23:23:24 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'devicehealth'
Jan 21 23:23:24 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:24.687+0000 7fbfb606f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 21 23:23:24 compute-0 ceph-mgr[74614]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 21 23:23:24 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'diskprediction_local'
Jan 21 23:23:25 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 21 23:23:25 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 21 23:23:25 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]:   from numpy import show_config as show_numpy_config
Jan 21 23:23:25 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:25.205+0000 7fbfb606f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 21 23:23:25 compute-0 ceph-mgr[74614]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 21 23:23:25 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'influx'
Jan 21 23:23:25 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:25.425+0000 7fbfb606f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 21 23:23:25 compute-0 ceph-mgr[74614]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 21 23:23:25 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'insights'
Jan 21 23:23:25 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'iostat'
Jan 21 23:23:25 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:25.873+0000 7fbfb606f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 21 23:23:25 compute-0 ceph-mgr[74614]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 21 23:23:25 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'k8sevents'
Jan 21 23:23:27 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'localpool'
Jan 21 23:23:27 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'mds_autoscaler'
Jan 21 23:23:28 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'mirroring'
Jan 21 23:23:28 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'nfs'
Jan 21 23:23:29 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:29.312+0000 7fbfb606f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 21 23:23:29 compute-0 ceph-mgr[74614]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 21 23:23:29 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'orchestrator'
Jan 21 23:23:30 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:30.010+0000 7fbfb606f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 21 23:23:30 compute-0 ceph-mgr[74614]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 21 23:23:30 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'osd_perf_query'
Jan 21 23:23:30 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:30.312+0000 7fbfb606f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 21 23:23:30 compute-0 ceph-mgr[74614]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 21 23:23:30 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'osd_support'
Jan 21 23:23:30 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:30.550+0000 7fbfb606f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 21 23:23:30 compute-0 ceph-mgr[74614]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 21 23:23:30 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'pg_autoscaler'
Jan 21 23:23:30 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:30.819+0000 7fbfb606f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 21 23:23:30 compute-0 ceph-mgr[74614]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 21 23:23:30 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'progress'
Jan 21 23:23:31 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:31.064+0000 7fbfb606f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 21 23:23:31 compute-0 ceph-mgr[74614]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 21 23:23:31 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'prometheus'
Jan 21 23:23:32 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:32.008+0000 7fbfb606f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 21 23:23:32 compute-0 ceph-mgr[74614]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 21 23:23:32 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'rbd_support'
Jan 21 23:23:32 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:32.298+0000 7fbfb606f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 21 23:23:32 compute-0 ceph-mgr[74614]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 21 23:23:32 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'restful'
Jan 21 23:23:32 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'rgw'
Jan 21 23:23:33 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:33.688+0000 7fbfb606f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 21 23:23:33 compute-0 ceph-mgr[74614]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 21 23:23:33 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'rook'
Jan 21 23:23:35 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:35.795+0000 7fbfb606f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 21 23:23:35 compute-0 ceph-mgr[74614]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 21 23:23:35 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'selftest'
Jan 21 23:23:36 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:36.037+0000 7fbfb606f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 21 23:23:36 compute-0 ceph-mgr[74614]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 21 23:23:36 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'snap_schedule'
Jan 21 23:23:36 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:36.276+0000 7fbfb606f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 21 23:23:36 compute-0 ceph-mgr[74614]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 21 23:23:36 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'stats'
Jan 21 23:23:36 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'status'
Jan 21 23:23:36 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:36.759+0000 7fbfb606f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 21 23:23:36 compute-0 ceph-mgr[74614]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 21 23:23:36 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'telegraf'
Jan 21 23:23:36 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:36.996+0000 7fbfb606f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 21 23:23:36 compute-0 ceph-mgr[74614]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 21 23:23:36 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'telemetry'
Jan 21 23:23:37 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:37.579+0000 7fbfb606f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 21 23:23:37 compute-0 ceph-mgr[74614]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 21 23:23:37 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'test_orchestrator'
Jan 21 23:23:38 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:38.216+0000 7fbfb606f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 21 23:23:38 compute-0 ceph-mgr[74614]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 21 23:23:38 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'volumes'
Jan 21 23:23:38 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:38.860+0000 7fbfb606f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 21 23:23:38 compute-0 ceph-mgr[74614]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 21 23:23:38 compute-0 ceph-mgr[74614]: mgr[py] Loading python module 'zabbix'
Jan 21 23:23:39 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:23:39.104+0000 7fbfb606f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
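Each 'missing NOTIFY_TYPES member' line above is the mgr noting that a bundled Python module does not declare which notification types it consumes. In this release these are warnings, not load failures; the 'Constructed class from module' lines below confirm the modules still come up. Whether any module actually failed can be checked afterwards; a sketch:

    ceph health detail                        # a real failure would raise MGR_MODULE_ERROR
    ceph mgr module ls --format json-pretty   # enabled / always-on / disabled breakdown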
Jan 21 23:23:39 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : Active manager daemon compute-0.boqcsl restarted
Jan 21 23:23:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 21 23:23:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: ms_deliver_dispatch: unhandled message 0x55cbd1a46420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 21 23:23:39 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.boqcsl
Jan 21 23:23:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 21 23:23:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 21 23:23:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: mgr handle_mgr_map Activating!
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: mgr handle_mgr_map I am now activating
Jan 21 23:23:39 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 21 23:23:39 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.boqcsl(active, starting, since 0.027642s)
Jan 21 23:23:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 21 23:23:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 21 23:23:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.boqcsl", "id": "compute-0.boqcsl"} v 0) v1
Jan 21 23:23:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mgr metadata", "who": "compute-0.boqcsl", "id": "compute-0.boqcsl"}]: dispatch
Jan 21 23:23:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Jan 21 23:23:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 21 23:23:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e1 all = 1
Jan 21 23:23:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 21 23:23:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 21 23:23:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Jan 21 23:23:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: balancer
Jan 21 23:23:39 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : Manager daemon compute-0.boqcsl is now available
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Starting
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:23:39
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [balancer INFO root] No pools available
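The balancer starts in upmap mode with a 5% max-misplaced budget and immediately finds nothing to optimize, since no pools exist at this stage of the bootstrap. Once the cluster has pools, the same state is queryable; a sketch:

    ceph balancer status
    ceph balancer mode upmap   # how the logged mode would be set explicitly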
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 21 23:23:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Jan 21 23:23:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Jan 21 23:23:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: cephadm
Jan 21 23:23:39 compute-0 ceph-mon[74318]: Active manager daemon compute-0.boqcsl restarted
Jan 21 23:23:39 compute-0 ceph-mon[74318]: Activating manager daemon compute-0.boqcsl
Jan 21 23:23:39 compute-0 ceph-mon[74318]: osdmap e2: 0 total, 0 up, 0 in
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: crash
Jan 21 23:23:39 compute-0 ceph-mon[74318]: mgrmap e5: compute-0.boqcsl(active, starting, since 0.027642s)
Jan 21 23:23:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 21 23:23:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mgr metadata", "who": "compute-0.boqcsl", "id": "compute-0.boqcsl"}]: dispatch
Jan 21 23:23:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 21 23:23:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 21 23:23:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 21 23:23:39 compute-0 ceph-mon[74318]: Manager daemon compute-0.boqcsl is now available
Jan 21 23:23:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: devicehealth
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: iostat
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: nfs
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: orchestrator
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [devicehealth INFO root] Starting
Jan 21 23:23:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 21 23:23:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: pg_autoscaler
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: progress
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [progress INFO root] Loading...
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [progress INFO root] No stored events to load
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [progress INFO root] Loaded [] historic events
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [progress INFO root] Loaded OSDMap, ready.
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:23:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 21 23:23:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] recovery thread starting
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] starting setup
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: rbd_support
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: restful
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [restful INFO root] server_addr: :: server_port: 8003
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: status
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [restful WARNING root] server not running: no certificate configured
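The restful module constructs successfully but declines to serve because no TLS certificate is configured, which is expected on a fresh bootstrap. If the REST API were wanted, the module can mint its own certificate; a sketch using the module's standard commands:

    ceph restful create-self-signed-cert
    ceph config set mgr mgr/restful/server_port 8003   # matches the server_port logged above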
Jan 21 23:23:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.boqcsl/mirror_snapshot_schedule"} v 0) v1
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.boqcsl/mirror_snapshot_schedule"}]: dispatch
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: telemetry
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] PerfHandler: starting
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TaskHandler: starting
Jan 21 23:23:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.boqcsl/trash_purge_schedule"} v 0) v1
Jan 21 23:23:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.boqcsl/trash_purge_schedule"}]: dispatch
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] setup complete
Jan 21 23:23:39 compute-0 ceph-mgr[74614]: mgr load Constructed class from module: volumes
Jan 21 23:23:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Jan 21 23:23:40 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Jan 21 23:23:40 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:40 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.boqcsl(active, since 1.03547s)
Jan 21 23:23:40 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 21 23:23:40 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 21 23:23:40 compute-0 elated_neumann[75439]: {
Jan 21 23:23:40 compute-0 elated_neumann[75439]:     "mgrmap_epoch": 6,
Jan 21 23:23:40 compute-0 elated_neumann[75439]:     "initialized": true
Jan 21 23:23:40 compute-0 elated_neumann[75439]: }
Jan 21 23:23:40 compute-0 systemd[1]: libpod-afc040bc022e3dc0b82fb19235869bb965488565671b26ff595bcd724d99ed96.scope: Deactivated successfully.
Jan 21 23:23:40 compute-0 podman[75423]: 2026-01-21 23:23:40.173657399 +0000 UTC m=+19.261719340 container died afc040bc022e3dc0b82fb19235869bb965488565671b26ff595bcd724d99ed96 (image=quay.io/ceph/ceph:v18, name=elated_neumann, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:23:40 compute-0 ceph-mon[74318]: Found migration_current of "None". Setting to last migration.
Jan 21 23:23:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 21 23:23:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 21 23:23:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.boqcsl/mirror_snapshot_schedule"}]: dispatch
Jan 21 23:23:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.boqcsl/trash_purge_schedule"}]: dispatch
Jan 21 23:23:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:40 compute-0 ceph-mon[74318]: mgrmap e6: compute-0.boqcsl(active, since 1.03547s)
Jan 21 23:23:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-339c93275a6a853206ccdccbfa2a06cb125f28075b38df6681c0d3ca07e9194b-merged.mount: Deactivated successfully.
Jan 21 23:23:40 compute-0 podman[75423]: 2026-01-21 23:23:40.224767662 +0000 UTC m=+19.312829593 container remove afc040bc022e3dc0b82fb19235869bb965488565671b26ff595bcd724d99ed96 (image=quay.io/ceph/ceph:v18, name=elated_neumann, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:23:40 compute-0 systemd[1]: libpod-conmon-afc040bc022e3dc0b82fb19235869bb965488565671b26ff595bcd724d99ed96.scope: Deactivated successfully.
Jan 21 23:23:40 compute-0 podman[75598]: 2026-01-21 23:23:40.304192199 +0000 UTC m=+0.046648037 container create 7a730e82557f03e7358b74650d9334af93de47256aabe32fbaf5bb1557013734 (image=quay.io/ceph/ceph:v18, name=elastic_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 23:23:40 compute-0 systemd[1]: Started libpod-conmon-7a730e82557f03e7358b74650d9334af93de47256aabe32fbaf5bb1557013734.scope.
Jan 21 23:23:40 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a44f64073cd01c031ca0259bfc8ee7d1eba2800ced1797bd196f3a84ca1620fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a44f64073cd01c031ca0259bfc8ee7d1eba2800ced1797bd196f3a84ca1620fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a44f64073cd01c031ca0259bfc8ee7d1eba2800ced1797bd196f3a84ca1620fb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:40 compute-0 podman[75598]: 2026-01-21 23:23:40.380216573 +0000 UTC m=+0.122672431 container init 7a730e82557f03e7358b74650d9334af93de47256aabe32fbaf5bb1557013734 (image=quay.io/ceph/ceph:v18, name=elastic_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:23:40 compute-0 podman[75598]: 2026-01-21 23:23:40.287810308 +0000 UTC m=+0.030266166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:40 compute-0 podman[75598]: 2026-01-21 23:23:40.388324541 +0000 UTC m=+0.130780379 container start 7a730e82557f03e7358b74650d9334af93de47256aabe32fbaf5bb1557013734 (image=quay.io/ceph/ceph:v18, name=elastic_hermann, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:23:40 compute-0 podman[75598]: 2026-01-21 23:23:40.403617859 +0000 UTC m=+0.146073717 container attach 7a730e82557f03e7358b74650d9334af93de47256aabe32fbaf5bb1557013734 (image=quay.io/ceph/ceph:v18, name=elastic_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 21 23:23:40 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Jan 21 23:23:40 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 21 23:23:40 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
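Here the bootstrap wires the orchestrator front end to the cephadm module: the 'orch set backend' dispatch above maps to a single CLI call, sketched below with its usual verification.

    ceph orch set backend cephadm
    ceph orch status              # should now report the backend as cephadm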
Jan 21 23:23:41 compute-0 systemd[1]: libpod-7a730e82557f03e7358b74650d9334af93de47256aabe32fbaf5bb1557013734.scope: Deactivated successfully.
Jan 21 23:23:41 compute-0 podman[75598]: 2026-01-21 23:23:41.008381548 +0000 UTC m=+0.750837386 container died 7a730e82557f03e7358b74650d9334af93de47256aabe32fbaf5bb1557013734 (image=quay.io/ceph/ceph:v18, name=elastic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:23:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a44f64073cd01c031ca0259bfc8ee7d1eba2800ced1797bd196f3a84ca1620fb-merged.mount: Deactivated successfully.
Jan 21 23:23:41 compute-0 podman[75598]: 2026-01-21 23:23:41.049453043 +0000 UTC m=+0.791908881 container remove 7a730e82557f03e7358b74650d9334af93de47256aabe32fbaf5bb1557013734 (image=quay.io/ceph/ceph:v18, name=elastic_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 23:23:41 compute-0 systemd[1]: libpod-conmon-7a730e82557f03e7358b74650d9334af93de47256aabe32fbaf5bb1557013734.scope: Deactivated successfully.
Jan 21 23:23:41 compute-0 podman[75652]: 2026-01-21 23:23:41.124028014 +0000 UTC m=+0.044094590 container create db28e659f21e7c39c50ff03dd360d60256da3ce0b0a4980c8dcf76780895911f (image=quay.io/ceph/ceph:v18, name=elegant_vaughan, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:23:41 compute-0 ceph-mgr[74614]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 23:23:41 compute-0 systemd[1]: Started libpod-conmon-db28e659f21e7c39c50ff03dd360d60256da3ce0b0a4980c8dcf76780895911f.scope.
Jan 21 23:23:41 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082bb0e3a57a0faa74d3cf9e101da4661bbd63bc21b9d87b29073e485ebfc246/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082bb0e3a57a0faa74d3cf9e101da4661bbd63bc21b9d87b29073e485ebfc246/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082bb0e3a57a0faa74d3cf9e101da4661bbd63bc21b9d87b29073e485ebfc246/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:41 compute-0 podman[75652]: 2026-01-21 23:23:41.106441856 +0000 UTC m=+0.026508462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:41 compute-0 podman[75652]: 2026-01-21 23:23:41.216291525 +0000 UTC m=+0.136358101 container init db28e659f21e7c39c50ff03dd360d60256da3ce0b0a4980c8dcf76780895911f (image=quay.io/ceph/ceph:v18, name=elegant_vaughan, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:23:41 compute-0 podman[75652]: 2026-01-21 23:23:41.222287307 +0000 UTC m=+0.142353883 container start db28e659f21e7c39c50ff03dd360d60256da3ce0b0a4980c8dcf76780895911f (image=quay.io/ceph/ceph:v18, name=elegant_vaughan, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:23:41 compute-0 podman[75652]: 2026-01-21 23:23:41.226088003 +0000 UTC m=+0.146154579 container attach db28e659f21e7c39c50ff03dd360d60256da3ce0b0a4980c8dcf76780895911f (image=quay.io/ceph/ceph:v18, name=elegant_vaughan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 21 23:23:41 compute-0 ceph-mgr[74614]: [cephadm INFO cherrypy.error] [21/Jan/2026:23:23:41] ENGINE Bus STARTING
Jan 21 23:23:41 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : [21/Jan/2026:23:23:41] ENGINE Bus STARTING
Jan 21 23:23:41 compute-0 ceph-mgr[74614]: [cephadm INFO cherrypy.error] [21/Jan/2026:23:23:41] ENGINE Serving on http://192.168.122.100:8765
Jan 21 23:23:41 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : [21/Jan/2026:23:23:41] ENGINE Serving on http://192.168.122.100:8765
Jan 21 23:23:41 compute-0 ceph-mgr[74614]: [cephadm INFO cherrypy.error] [21/Jan/2026:23:23:41] ENGINE Serving on https://192.168.122.100:7150
Jan 21 23:23:41 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : [21/Jan/2026:23:23:41] ENGINE Serving on https://192.168.122.100:7150
Jan 21 23:23:41 compute-0 ceph-mgr[74614]: [cephadm INFO cherrypy.error] [21/Jan/2026:23:23:41] ENGINE Bus STARTED
Jan 21 23:23:41 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : [21/Jan/2026:23:23:41] ENGINE Bus STARTED
Jan 21 23:23:41 compute-0 ceph-mgr[74614]: [cephadm INFO cherrypy.error] [21/Jan/2026:23:23:41] ENGINE Client ('192.168.122.100', 43724) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 23:23:41 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : [21/Jan/2026:23:23:41] ENGINE Client ('192.168.122.100', 43724) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 23:23:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 21 23:23:41 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 21 23:23:41 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Jan 21 23:23:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:41 compute-0 ceph-mgr[74614]: [cephadm INFO root] Set ssh ssh_user
Jan 21 23:23:41 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 21 23:23:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Jan 21 23:23:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:41 compute-0 ceph-mgr[74614]: [cephadm INFO root] Set ssh ssh_config
Jan 21 23:23:41 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 21 23:23:41 compute-0 ceph-mgr[74614]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 21 23:23:41 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 21 23:23:41 compute-0 elegant_vaughan[75668]: ssh user set to ceph-admin. sudo will be used
Jan 21 23:23:41 compute-0 systemd[1]: libpod-db28e659f21e7c39c50ff03dd360d60256da3ce0b0a4980c8dcf76780895911f.scope: Deactivated successfully.
Jan 21 23:23:41 compute-0 podman[75652]: 2026-01-21 23:23:41.783149194 +0000 UTC m=+0.703215770 container died db28e659f21e7c39c50ff03dd360d60256da3ce0b0a4980c8dcf76780895911f (image=quay.io/ceph/ceph:v18, name=elegant_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:23:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-082bb0e3a57a0faa74d3cf9e101da4661bbd63bc21b9d87b29073e485ebfc246-merged.mount: Deactivated successfully.
Jan 21 23:23:41 compute-0 podman[75652]: 2026-01-21 23:23:41.830655906 +0000 UTC m=+0.750722492 container remove db28e659f21e7c39c50ff03dd360d60256da3ce0b0a4980c8dcf76780895911f (image=quay.io/ceph/ceph:v18, name=elegant_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 21 23:23:41 compute-0 systemd[1]: libpod-conmon-db28e659f21e7c39c50ff03dd360d60256da3ce0b0a4980c8dcf76780895911f.scope: Deactivated successfully.
Jan 21 23:23:41 compute-0 podman[75731]: 2026-01-21 23:23:41.901633296 +0000 UTC m=+0.047847963 container create 657dacf92579535eda75539721c4bb0c1547c38d697a1f1453a9895db2e28b32 (image=quay.io/ceph/ceph:v18, name=hungry_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 23:23:41 compute-0 systemd[1]: Started libpod-conmon-657dacf92579535eda75539721c4bb0c1547c38d697a1f1453a9895db2e28b32.scope.
Jan 21 23:23:41 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00867448153ef64b4c3815e4217a716bcf58f31113bb05a11b34211eddcb90c3/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00867448153ef64b4c3815e4217a716bcf58f31113bb05a11b34211eddcb90c3/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00867448153ef64b4c3815e4217a716bcf58f31113bb05a11b34211eddcb90c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00867448153ef64b4c3815e4217a716bcf58f31113bb05a11b34211eddcb90c3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00867448153ef64b4c3815e4217a716bcf58f31113bb05a11b34211eddcb90c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:41 compute-0 podman[75731]: 2026-01-21 23:23:41.97338165 +0000 UTC m=+0.119596347 container init 657dacf92579535eda75539721c4bb0c1547c38d697a1f1453a9895db2e28b32 (image=quay.io/ceph/ceph:v18, name=hungry_bell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 21 23:23:41 compute-0 podman[75731]: 2026-01-21 23:23:41.879875862 +0000 UTC m=+0.026090559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:41 compute-0 ceph-mon[74318]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 21 23:23:41 compute-0 ceph-mon[74318]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 21 23:23:41 compute-0 ceph-mon[74318]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 21 23:23:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 21 23:23:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:41 compute-0 podman[75731]: 2026-01-21 23:23:41.983981213 +0000 UTC m=+0.130195930 container start 657dacf92579535eda75539721c4bb0c1547c38d697a1f1453a9895db2e28b32 (image=quay.io/ceph/ceph:v18, name=hungry_bell, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 23:23:41 compute-0 podman[75731]: 2026-01-21 23:23:41.988842343 +0000 UTC m=+0.135057020 container attach 657dacf92579535eda75539721c4bb0c1547c38d697a1f1453a9895db2e28b32 (image=quay.io/ceph/ceph:v18, name=hungry_bell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 23:23:41 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.boqcsl(active, since 2s)
Jan 21 23:23:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019915307 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:23:42 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Jan 21 23:23:42 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:42 compute-0 ceph-mgr[74614]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 21 23:23:42 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 21 23:23:42 compute-0 ceph-mgr[74614]: [cephadm INFO root] Set ssh private key
Jan 21 23:23:42 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 21 23:23:42 compute-0 systemd[1]: libpod-657dacf92579535eda75539721c4bb0c1547c38d697a1f1453a9895db2e28b32.scope: Deactivated successfully.
Jan 21 23:23:42 compute-0 podman[75731]: 2026-01-21 23:23:42.562276403 +0000 UTC m=+0.708491080 container died 657dacf92579535eda75539721c4bb0c1547c38d697a1f1453a9895db2e28b32 (image=quay.io/ceph/ceph:v18, name=hungry_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 23:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-00867448153ef64b4c3815e4217a716bcf58f31113bb05a11b34211eddcb90c3-merged.mount: Deactivated successfully.
Jan 21 23:23:42 compute-0 podman[75731]: 2026-01-21 23:23:42.595183329 +0000 UTC m=+0.741398006 container remove 657dacf92579535eda75539721c4bb0c1547c38d697a1f1453a9895db2e28b32 (image=quay.io/ceph/ceph:v18, name=hungry_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 21 23:23:42 compute-0 systemd[1]: libpod-conmon-657dacf92579535eda75539721c4bb0c1547c38d697a1f1453a9895db2e28b32.scope: Deactivated successfully.
Jan 21 23:23:42 compute-0 podman[75782]: 2026-01-21 23:23:42.667761208 +0000 UTC m=+0.047608826 container create 09d4721e314034782e9eae5df8ee9d01d77c1d697d051b5d8b4297a27cb2a890 (image=quay.io/ceph/ceph:v18, name=interesting_goldwasser, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:23:42 compute-0 systemd[1]: Started libpod-conmon-09d4721e314034782e9eae5df8ee9d01d77c1d697d051b5d8b4297a27cb2a890.scope.
Jan 21 23:23:42 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d86cc8f88bfb7798719dc2afa9770b76c3d16a0e9162e6352538321fc7f3795/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d86cc8f88bfb7798719dc2afa9770b76c3d16a0e9162e6352538321fc7f3795/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d86cc8f88bfb7798719dc2afa9770b76c3d16a0e9162e6352538321fc7f3795/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d86cc8f88bfb7798719dc2afa9770b76c3d16a0e9162e6352538321fc7f3795/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d86cc8f88bfb7798719dc2afa9770b76c3d16a0e9162e6352538321fc7f3795/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:42 compute-0 podman[75782]: 2026-01-21 23:23:42.738678116 +0000 UTC m=+0.118525774 container init 09d4721e314034782e9eae5df8ee9d01d77c1d697d051b5d8b4297a27cb2a890 (image=quay.io/ceph/ceph:v18, name=interesting_goldwasser, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 23:23:42 compute-0 podman[75782]: 2026-01-21 23:23:42.647171828 +0000 UTC m=+0.027019466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:42 compute-0 podman[75782]: 2026-01-21 23:23:42.747780335 +0000 UTC m=+0.127627953 container start 09d4721e314034782e9eae5df8ee9d01d77c1d697d051b5d8b4297a27cb2a890 (image=quay.io/ceph/ceph:v18, name=interesting_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 21 23:23:42 compute-0 podman[75782]: 2026-01-21 23:23:42.750837908 +0000 UTC m=+0.130685616 container attach 09d4721e314034782e9eae5df8ee9d01d77c1d697d051b5d8b4297a27cb2a890 (image=quay.io/ceph/ceph:v18, name=interesting_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:23:42 compute-0 ceph-mon[74318]: [21/Jan/2026:23:23:41] ENGINE Bus STARTING
Jan 21 23:23:42 compute-0 ceph-mon[74318]: [21/Jan/2026:23:23:41] ENGINE Serving on http://192.168.122.100:8765
Jan 21 23:23:42 compute-0 ceph-mon[74318]: [21/Jan/2026:23:23:41] ENGINE Serving on https://192.168.122.100:7150
Jan 21 23:23:42 compute-0 ceph-mon[74318]: [21/Jan/2026:23:23:41] ENGINE Bus STARTED
Jan 21 23:23:42 compute-0 ceph-mon[74318]: [21/Jan/2026:23:23:41] ENGINE Client ('192.168.122.100', 43724) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 23:23:42 compute-0 ceph-mon[74318]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:42 compute-0 ceph-mon[74318]: Set ssh ssh_user
Jan 21 23:23:42 compute-0 ceph-mon[74318]: Set ssh ssh_config
Jan 21 23:23:42 compute-0 ceph-mon[74318]: ssh user set to ceph-admin. sudo will be used
Jan 21 23:23:42 compute-0 ceph-mon[74318]: mgrmap e7: compute-0.boqcsl(active, since 2s)
Jan 21 23:23:42 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:43 compute-0 ceph-mgr[74614]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 23:23:43 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Jan 21 23:23:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:43 compute-0 ceph-mgr[74614]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 21 23:23:43 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 21 23:23:43 compute-0 systemd[1]: libpod-09d4721e314034782e9eae5df8ee9d01d77c1d697d051b5d8b4297a27cb2a890.scope: Deactivated successfully.
Jan 21 23:23:43 compute-0 podman[75782]: 2026-01-21 23:23:43.312893361 +0000 UTC m=+0.692740989 container died 09d4721e314034782e9eae5df8ee9d01d77c1d697d051b5d8b4297a27cb2a890 (image=quay.io/ceph/ceph:v18, name=interesting_goldwasser, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 21 23:23:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d86cc8f88bfb7798719dc2afa9770b76c3d16a0e9162e6352538321fc7f3795-merged.mount: Deactivated successfully.
Jan 21 23:23:43 compute-0 podman[75782]: 2026-01-21 23:23:43.354835384 +0000 UTC m=+0.734683012 container remove 09d4721e314034782e9eae5df8ee9d01d77c1d697d051b5d8b4297a27cb2a890 (image=quay.io/ceph/ceph:v18, name=interesting_goldwasser, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:23:43 compute-0 systemd[1]: libpod-conmon-09d4721e314034782e9eae5df8ee9d01d77c1d697d051b5d8b4297a27cb2a890.scope: Deactivated successfully.
Jan 21 23:23:43 compute-0 podman[75839]: 2026-01-21 23:23:43.435101347 +0000 UTC m=+0.055078954 container create a5489eb03ab66650f7b0bd3544c52a476c3725a04b326c181428f739611a0998 (image=quay.io/ceph/ceph:v18, name=nice_booth, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:23:43 compute-0 systemd[1]: Started libpod-conmon-a5489eb03ab66650f7b0bd3544c52a476c3725a04b326c181428f739611a0998.scope.
Jan 21 23:23:43 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd949d99b4c6d6854d1b8f828d28455b894d2355df6eb059b8b3b4cd0a68904f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd949d99b4c6d6854d1b8f828d28455b894d2355df6eb059b8b3b4cd0a68904f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd949d99b4c6d6854d1b8f828d28455b894d2355df6eb059b8b3b4cd0a68904f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:43 compute-0 podman[75839]: 2026-01-21 23:23:43.41526125 +0000 UTC m=+0.035238837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:43 compute-0 podman[75839]: 2026-01-21 23:23:43.656435394 +0000 UTC m=+0.276413001 container init a5489eb03ab66650f7b0bd3544c52a476c3725a04b326c181428f739611a0998 (image=quay.io/ceph/ceph:v18, name=nice_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 21 23:23:43 compute-0 podman[75839]: 2026-01-21 23:23:43.664936434 +0000 UTC m=+0.284914001 container start a5489eb03ab66650f7b0bd3544c52a476c3725a04b326c181428f739611a0998 (image=quay.io/ceph/ceph:v18, name=nice_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 21 23:23:43 compute-0 podman[75839]: 2026-01-21 23:23:43.768049036 +0000 UTC m=+0.388026643 container attach a5489eb03ab66650f7b0bd3544c52a476c3725a04b326c181428f739611a0998 (image=quay.io/ceph/ceph:v18, name=nice_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:23:44 compute-0 ceph-mon[74318]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:44 compute-0 ceph-mon[74318]: Set ssh ssh_identity_key
Jan 21 23:23:44 compute-0 ceph-mon[74318]: Set ssh private key
Jan 21 23:23:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:44 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:44 compute-0 nice_booth[75855]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDrcZfV7OcLp+UKpI3fa1gTtK4San5rwiwmGOePJ1SeRx/NUvrGxIRyxymB7Vq3lrf+sUtt9CwN8FTaY2j3Z6dz4Z5AJk6C7piFfxrxxXfSU0P2QEhm2c8nS5RAIHjOq7YNnlpyzuuBKdAaPTkjq6nJWvEyr8QA4nGIpAyEbJ9fgJoxGFv0kGn4LN8TFMaUAGm+NTc1rbJYDd0PzjmKz3tHErtMFXzGGyUmJQh9Rt5EKA/ikwQvIqTIRp53zNfVHrbrt79hp3k3+AbcEGkJuh+7e2e9axJ2KBHvu32p7Tk5Q9Vp35Z3LcVusJCNCWd3fO7wpW6WxNquwEMv3VTx6NExoO+jUdInuiwyCxE3BYBIOGvUjk82rpWay200gZG9BQTcnHi9A3O7Q+9k5uPOwlZKXMwwHhFggH7lzRx9+JT/lJ6nZMfT53I8fIyZF1QPfprHMHdEgDWfnmpFtZvRmylhSQvRbz+jdkrMfWde6tjsCPOe8N9Kr1nokg+mz/FgFtc= zuul@controller
Jan 21 23:23:44 compute-0 systemd[1]: libpod-a5489eb03ab66650f7b0bd3544c52a476c3725a04b326c181428f739611a0998.scope: Deactivated successfully.
Jan 21 23:23:44 compute-0 podman[75839]: 2026-01-21 23:23:44.20615421 +0000 UTC m=+0.826131817 container died a5489eb03ab66650f7b0bd3544c52a476c3725a04b326c181428f739611a0998 (image=quay.io/ceph/ceph:v18, name=nice_booth, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:23:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd949d99b4c6d6854d1b8f828d28455b894d2355df6eb059b8b3b4cd0a68904f-merged.mount: Deactivated successfully.
Jan 21 23:23:45 compute-0 ceph-mgr[74614]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 23:23:45 compute-0 ceph-mon[74318]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:45 compute-0 ceph-mon[74318]: Set ssh ssh_identity_pub
Jan 21 23:23:45 compute-0 podman[75839]: 2026-01-21 23:23:45.53185347 +0000 UTC m=+2.151831077 container remove a5489eb03ab66650f7b0bd3544c52a476c3725a04b326c181428f739611a0998 (image=quay.io/ceph/ceph:v18, name=nice_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:23:45 compute-0 systemd[1]: libpod-conmon-a5489eb03ab66650f7b0bd3544c52a476c3725a04b326c181428f739611a0998.scope: Deactivated successfully.
Jan 21 23:23:45 compute-0 podman[75893]: 2026-01-21 23:23:45.577611398 +0000 UTC m=+0.021103257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:45 compute-0 podman[75893]: 2026-01-21 23:23:45.689934122 +0000 UTC m=+0.133425911 container create 9c419520bd0f546d6735e88efaeaa3b5dc742dcaba3e4743af944ec08ecd63bf (image=quay.io/ceph/ceph:v18, name=crazy_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:23:45 compute-0 systemd[1]: Started libpod-conmon-9c419520bd0f546d6735e88efaeaa3b5dc742dcaba3e4743af944ec08ecd63bf.scope.
Jan 21 23:23:45 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d7e771cee123ae687f8e1f1842e09527d71dbc98efccbd974cb1d1e631db8b0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d7e771cee123ae687f8e1f1842e09527d71dbc98efccbd974cb1d1e631db8b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d7e771cee123ae687f8e1f1842e09527d71dbc98efccbd974cb1d1e631db8b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:45 compute-0 podman[75893]: 2026-01-21 23:23:45.898268361 +0000 UTC m=+0.341760210 container init 9c419520bd0f546d6735e88efaeaa3b5dc742dcaba3e4743af944ec08ecd63bf (image=quay.io/ceph/ceph:v18, name=crazy_chatterjee, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 21 23:23:45 compute-0 podman[75893]: 2026-01-21 23:23:45.903082328 +0000 UTC m=+0.346574087 container start 9c419520bd0f546d6735e88efaeaa3b5dc742dcaba3e4743af944ec08ecd63bf (image=quay.io/ceph/ceph:v18, name=crazy_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 23:23:46 compute-0 podman[75893]: 2026-01-21 23:23:46.008664626 +0000 UTC m=+0.452156475 container attach 9c419520bd0f546d6735e88efaeaa3b5dc742dcaba3e4743af944ec08ecd63bf (image=quay.io/ceph/ceph:v18, name=crazy_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 21 23:23:46 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:46 compute-0 ceph-mon[74318]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:46 compute-0 sshd-session[75935]: Accepted publickey for ceph-admin from 192.168.122.100 port 35106 ssh2: RSA SHA256:kW7AbEF6E9Zse/yjN6dVjvmzoqBwUgKYFkxqB1vmEmU
Jan 21 23:23:46 compute-0 systemd-logind[786]: New session 21 of user ceph-admin.
Jan 21 23:23:46 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 21 23:23:46 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 21 23:23:46 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 21 23:23:46 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 21 23:23:46 compute-0 systemd[75939]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 23:23:46 compute-0 systemd[75939]: Queued start job for default target Main User Target.
Jan 21 23:23:46 compute-0 systemd[75939]: Created slice User Application Slice.
Jan 21 23:23:46 compute-0 systemd[75939]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 21 23:23:46 compute-0 systemd[75939]: Started Daily Cleanup of User's Temporary Directories.
Jan 21 23:23:46 compute-0 systemd[75939]: Reached target Paths.
Jan 21 23:23:46 compute-0 systemd[75939]: Reached target Timers.
Jan 21 23:23:46 compute-0 systemd[75939]: Starting D-Bus User Message Bus Socket...
Jan 21 23:23:46 compute-0 systemd[75939]: Starting Create User's Volatile Files and Directories...
Jan 21 23:23:46 compute-0 sshd-session[75952]: Accepted publickey for ceph-admin from 192.168.122.100 port 35120 ssh2: RSA SHA256:kW7AbEF6E9Zse/yjN6dVjvmzoqBwUgKYFkxqB1vmEmU
Jan 21 23:23:46 compute-0 systemd[75939]: Finished Create User's Volatile Files and Directories.
Jan 21 23:23:46 compute-0 systemd[75939]: Listening on D-Bus User Message Bus Socket.
Jan 21 23:23:46 compute-0 systemd[75939]: Reached target Sockets.
Jan 21 23:23:46 compute-0 systemd[75939]: Reached target Basic System.
Jan 21 23:23:46 compute-0 systemd[75939]: Reached target Main User Target.
Jan 21 23:23:46 compute-0 systemd[75939]: Startup finished in 134ms.
Jan 21 23:23:46 compute-0 systemd-logind[786]: New session 23 of user ceph-admin.
Jan 21 23:23:46 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 21 23:23:46 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Jan 21 23:23:46 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Jan 21 23:23:46 compute-0 sshd-session[75935]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 23:23:46 compute-0 sshd-session[75952]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 23:23:46 compute-0 sudo[75959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:46 compute-0 sudo[75959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:46 compute-0 sudo[75959]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:47 compute-0 sudo[75984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:23:47 compute-0 sudo[75984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:47 compute-0 sudo[75984]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:47 compute-0 ceph-mgr[74614]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 23:23:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052908 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:23:47 compute-0 sshd-session[76009]: Accepted publickey for ceph-admin from 192.168.122.100 port 35122 ssh2: RSA SHA256:kW7AbEF6E9Zse/yjN6dVjvmzoqBwUgKYFkxqB1vmEmU
Jan 21 23:23:47 compute-0 systemd-logind[786]: New session 24 of user ceph-admin.
Jan 21 23:23:47 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Jan 21 23:23:47 compute-0 sshd-session[76009]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 23:23:47 compute-0 ceph-mon[74318]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:47 compute-0 sudo[76013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:47 compute-0 sudo[76013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:47 compute-0 sudo[76013]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:47 compute-0 sudo[76038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Jan 21 23:23:47 compute-0 sudo[76038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:47 compute-0 sudo[76038]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:47 compute-0 sshd-session[76063]: Accepted publickey for ceph-admin from 192.168.122.100 port 35126 ssh2: RSA SHA256:kW7AbEF6E9Zse/yjN6dVjvmzoqBwUgKYFkxqB1vmEmU
Jan 21 23:23:47 compute-0 systemd-logind[786]: New session 25 of user ceph-admin.
Jan 21 23:23:47 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Jan 21 23:23:47 compute-0 sshd-session[76063]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 23:23:47 compute-0 sudo[76067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:47 compute-0 sudo[76067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:47 compute-0 sudo[76067]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:47 compute-0 sudo[76092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Jan 21 23:23:47 compute-0 sudo[76092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:48 compute-0 sudo[76092]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:48 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 21 23:23:48 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 21 23:23:48 compute-0 sshd-session[76117]: Accepted publickey for ceph-admin from 192.168.122.100 port 35134 ssh2: RSA SHA256:kW7AbEF6E9Zse/yjN6dVjvmzoqBwUgKYFkxqB1vmEmU
Jan 21 23:23:48 compute-0 systemd-logind[786]: New session 26 of user ceph-admin.
Jan 21 23:23:48 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Jan 21 23:23:48 compute-0 sshd-session[76117]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 23:23:48 compute-0 sudo[76121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:48 compute-0 sudo[76121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:48 compute-0 sudo[76121]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:48 compute-0 sudo[76146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:23:48 compute-0 sudo[76146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:48 compute-0 sudo[76146]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:48 compute-0 ceph-mon[74318]: Deploying cephadm binary to compute-0
Jan 21 23:23:48 compute-0 sshd-session[76171]: Accepted publickey for ceph-admin from 192.168.122.100 port 35140 ssh2: RSA SHA256:kW7AbEF6E9Zse/yjN6dVjvmzoqBwUgKYFkxqB1vmEmU
Jan 21 23:23:48 compute-0 systemd-logind[786]: New session 27 of user ceph-admin.
Jan 21 23:23:48 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Jan 21 23:23:48 compute-0 sshd-session[76171]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 23:23:48 compute-0 sudo[76175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:48 compute-0 sudo[76175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:48 compute-0 sudo[76175]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:48 compute-0 sudo[76200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:23:48 compute-0 sudo[76200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:48 compute-0 sudo[76200]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:49 compute-0 sshd-session[76225]: Accepted publickey for ceph-admin from 192.168.122.100 port 35148 ssh2: RSA SHA256:kW7AbEF6E9Zse/yjN6dVjvmzoqBwUgKYFkxqB1vmEmU
Jan 21 23:23:49 compute-0 systemd-logind[786]: New session 28 of user ceph-admin.
Jan 21 23:23:49 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Jan 21 23:23:49 compute-0 sshd-session[76225]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 23:23:49 compute-0 ceph-mgr[74614]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 23:23:49 compute-0 sudo[76229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:49 compute-0 sudo[76229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:49 compute-0 sudo[76229]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:49 compute-0 sudo[76254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Jan 21 23:23:49 compute-0 sudo[76254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:49 compute-0 sudo[76254]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:49 compute-0 sshd-session[76279]: Accepted publickey for ceph-admin from 192.168.122.100 port 35156 ssh2: RSA SHA256:kW7AbEF6E9Zse/yjN6dVjvmzoqBwUgKYFkxqB1vmEmU
Jan 21 23:23:49 compute-0 systemd-logind[786]: New session 29 of user ceph-admin.
Jan 21 23:23:49 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Jan 21 23:23:49 compute-0 sshd-session[76279]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 23:23:49 compute-0 sudo[76283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:49 compute-0 sudo[76283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:49 compute-0 sudo[76283]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:49 compute-0 sudo[76308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:23:49 compute-0 sudo[76308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:49 compute-0 sudo[76308]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:50 compute-0 sshd-session[76333]: Accepted publickey for ceph-admin from 192.168.122.100 port 35172 ssh2: RSA SHA256:kW7AbEF6E9Zse/yjN6dVjvmzoqBwUgKYFkxqB1vmEmU
Jan 21 23:23:50 compute-0 systemd-logind[786]: New session 30 of user ceph-admin.
Jan 21 23:23:50 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Jan 21 23:23:50 compute-0 sshd-session[76333]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 23:23:50 compute-0 sudo[76337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:50 compute-0 sudo[76337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:50 compute-0 sudo[76337]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:50 compute-0 sudo[76362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Jan 21 23:23:50 compute-0 sudo[76362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:50 compute-0 sudo[76362]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:50 compute-0 sshd-session[76387]: Accepted publickey for ceph-admin from 192.168.122.100 port 35176 ssh2: RSA SHA256:kW7AbEF6E9Zse/yjN6dVjvmzoqBwUgKYFkxqB1vmEmU
Jan 21 23:23:50 compute-0 systemd-logind[786]: New session 31 of user ceph-admin.
Jan 21 23:23:50 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Jan 21 23:23:50 compute-0 sshd-session[76387]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 23:23:50 compute-0 sshd-session[76414]: Accepted publickey for ceph-admin from 192.168.122.100 port 35180 ssh2: RSA SHA256:kW7AbEF6E9Zse/yjN6dVjvmzoqBwUgKYFkxqB1vmEmU
Jan 21 23:23:50 compute-0 systemd-logind[786]: New session 32 of user ceph-admin.
Jan 21 23:23:50 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Jan 21 23:23:50 compute-0 sshd-session[76414]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 23:23:51 compute-0 sudo[76418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:51 compute-0 sudo[76418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:51 compute-0 sudo[76418]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:51 compute-0 sudo[76443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Jan 21 23:23:51 compute-0 sudo[76443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:51 compute-0 sudo[76443]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:51 compute-0 ceph-mgr[74614]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 23:23:51 compute-0 sshd-session[76468]: Accepted publickey for ceph-admin from 192.168.122.100 port 35182 ssh2: RSA SHA256:kW7AbEF6E9Zse/yjN6dVjvmzoqBwUgKYFkxqB1vmEmU
Jan 21 23:23:51 compute-0 systemd-logind[786]: New session 33 of user ceph-admin.
Jan 21 23:23:51 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Jan 21 23:23:51 compute-0 sshd-session[76468]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 23:23:51 compute-0 sudo[76472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:51 compute-0 sudo[76472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:51 compute-0 sudo[76472]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:51 compute-0 sudo[76497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Jan 21 23:23:51 compute-0 sudo[76497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:51 compute-0 sudo[76497]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 21 23:23:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054708 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:23:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:52 compute-0 ceph-mgr[74614]: [cephadm INFO root] Added host compute-0
Jan 21 23:23:52 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 21 23:23:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 21 23:23:52 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 21 23:23:52 compute-0 crazy_chatterjee[75909]: Added host 'compute-0' with addr '192.168.122.100'
Jan 21 23:23:52 compute-0 systemd[1]: libpod-9c419520bd0f546d6735e88efaeaa3b5dc742dcaba3e4743af944ec08ecd63bf.scope: Deactivated successfully.
Jan 21 23:23:52 compute-0 podman[75893]: 2026-01-21 23:23:52.257776983 +0000 UTC m=+6.701268752 container died 9c419520bd0f546d6735e88efaeaa3b5dc742dcaba3e4743af944ec08ecd63bf (image=quay.io/ceph/ceph:v18, name=crazy_chatterjee, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 23:23:52 compute-0 sudo[76544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:52 compute-0 sudo[76544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:52 compute-0 sudo[76544]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:52 compute-0 sudo[76580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:23:52 compute-0 sudo[76580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:52 compute-0 sudo[76580]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:52 compute-0 sudo[76605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:52 compute-0 sudo[76605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:52 compute-0 sudo[76605]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:52 compute-0 sudo[76630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Jan 21 23:23:52 compute-0 sudo[76630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
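
The inspect-image run above resolves the configured container image (quay.io/ceph/ceph:v18) to its image id and the Ceph version baked into it; the "ceph version 18.2.7 ... reef (stable)" line further down is the output of exactly this kind of probe. A rough stand-in using podman's JSON output (field names per podman image inspect; the real cephadm step additionally runs the image to read its version):

    import json
    import subprocess

    # Resolve the image to its id and repo digests, as inspect-image does
    # conceptually.
    out = subprocess.run(["podman", "image", "inspect", "quay.io/ceph/ceph:v18"],
                         capture_output=True, text=True, check=True).stdout
    info = json.loads(out)[0]
    print(info["Id"])
    print(info.get("RepoDigests", []))
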
Jan 21 23:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d7e771cee123ae687f8e1f1842e09527d71dbc98efccbd974cb1d1e631db8b0-merged.mount: Deactivated successfully.
Jan 21 23:23:53 compute-0 podman[75893]: 2026-01-21 23:23:53.040994518 +0000 UTC m=+7.484486307 container remove 9c419520bd0f546d6735e88efaeaa3b5dc742dcaba3e4743af944ec08ecd63bf (image=quay.io/ceph/ceph:v18, name=crazy_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:23:53 compute-0 systemd[1]: libpod-conmon-9c419520bd0f546d6735e88efaeaa3b5dc742dcaba3e4743af944ec08ecd63bf.scope: Deactivated successfully.
Jan 21 23:23:53 compute-0 ceph-mgr[74614]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 23:23:53 compute-0 podman[76669]: 2026-01-21 23:23:53.111869934 +0000 UTC m=+0.040057965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:53 compute-0 podman[76669]: 2026-01-21 23:23:53.322367619 +0000 UTC m=+0.250555610 container create d6a467c61b02f09b6322d5ffd13861c144f7b06d17afe43fa8ee535fb4966b2f (image=quay.io/ceph/ceph:v18, name=magical_lederberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Jan 21 23:23:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:53 compute-0 ceph-mon[74318]: Added host compute-0
Jan 21 23:23:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 21 23:23:53 compute-0 systemd[1]: Started libpod-conmon-d6a467c61b02f09b6322d5ffd13861c144f7b06d17afe43fa8ee535fb4966b2f.scope.
Jan 21 23:23:53 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b2628ba8f699672e582f3b4bf1fbc50aa52b0c6df58e420a788302b5e5fd109/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b2628ba8f699672e582f3b4bf1fbc50aa52b0c6df58e420a788302b5e5fd109/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b2628ba8f699672e582f3b4bf1fbc50aa52b0c6df58e420a788302b5e5fd109/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:53 compute-0 podman[76669]: 2026-01-21 23:23:53.545669036 +0000 UTC m=+0.473857027 container init d6a467c61b02f09b6322d5ffd13861c144f7b06d17afe43fa8ee535fb4966b2f (image=quay.io/ceph/ceph:v18, name=magical_lederberg, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 23:23:53 compute-0 podman[76669]: 2026-01-21 23:23:53.552136704 +0000 UTC m=+0.480324725 container start d6a467c61b02f09b6322d5ffd13861c144f7b06d17afe43fa8ee535fb4966b2f (image=quay.io/ceph/ceph:v18, name=magical_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 21 23:23:53 compute-0 podman[76669]: 2026-01-21 23:23:53.734877811 +0000 UTC m=+0.663065852 container attach d6a467c61b02f09b6322d5ffd13861c144f7b06d17afe43fa8ee535fb4966b2f (image=quay.io/ceph/ceph:v18, name=magical_lederberg, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:23:53 compute-0 podman[76702]: 2026-01-21 23:23:53.83101119 +0000 UTC m=+0.035507967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:53 compute-0 podman[76702]: 2026-01-21 23:23:53.959971283 +0000 UTC m=+0.164468070 container create 3259dd6eeda0d60f9f508ab04479f3c71608bd26224706416069823dde4949ae (image=quay.io/ceph/ceph:v18, name=nifty_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:23:53 compute-0 systemd[1]: Started libpod-conmon-3259dd6eeda0d60f9f508ab04479f3c71608bd26224706416069823dde4949ae.scope.
Jan 21 23:23:54 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:54 compute-0 podman[76702]: 2026-01-21 23:23:54.020750071 +0000 UTC m=+0.225246878 container init 3259dd6eeda0d60f9f508ab04479f3c71608bd26224706416069823dde4949ae (image=quay.io/ceph/ceph:v18, name=nifty_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 23:23:54 compute-0 podman[76702]: 2026-01-21 23:23:54.026778035 +0000 UTC m=+0.231274812 container start 3259dd6eeda0d60f9f508ab04479f3c71608bd26224706416069823dde4949ae (image=quay.io/ceph/ceph:v18, name=nifty_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:23:54 compute-0 podman[76702]: 2026-01-21 23:23:54.031157539 +0000 UTC m=+0.235654346 container attach 3259dd6eeda0d60f9f508ab04479f3c71608bd26224706416069823dde4949ae (image=quay.io/ceph/ceph:v18, name=nifty_dijkstra, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 21 23:23:54 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:54 compute-0 ceph-mgr[74614]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 21 23:23:54 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 21 23:23:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 21 23:23:54 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:54 compute-0 magical_lederberg[76694]: Scheduled mon update...
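
The orch apply sequence above saves a mon service spec with a placement count of 5 under mgr/cephadm/spec.mon; "Scheduled mon update..." means the spec is applied asynchronously on the orchestrator's next reconciliation pass rather than inline. One way to issue and inspect the same step from the CLI (flags as documented for cephadm; assumes an admin keyring):

    import subprocess

    # Ask for five monitors; mirrors "Saving service mon spec with
    # placement count:5" above.
    subprocess.run(["ceph", "orch", "apply", "mon", "--placement=5"],
                   check=True)

    # The spec is persisted under this config-key (see the mon_command
    # above) and converges in the background.
    print(subprocess.run(["ceph", "config-key", "get", "mgr/cephadm/spec.mon"],
                         capture_output=True, text=True, check=True).stdout)
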
Jan 21 23:23:54 compute-0 nifty_dijkstra[76738]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Jan 21 23:23:54 compute-0 systemd[1]: libpod-3259dd6eeda0d60f9f508ab04479f3c71608bd26224706416069823dde4949ae.scope: Deactivated successfully.
Jan 21 23:23:54 compute-0 podman[76702]: 2026-01-21 23:23:54.347531221 +0000 UTC m=+0.552028018 container died 3259dd6eeda0d60f9f508ab04479f3c71608bd26224706416069823dde4949ae (image=quay.io/ceph/ceph:v18, name=nifty_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 23:23:54 compute-0 systemd[1]: libpod-d6a467c61b02f09b6322d5ffd13861c144f7b06d17afe43fa8ee535fb4966b2f.scope: Deactivated successfully.
Jan 21 23:23:54 compute-0 ceph-mon[74318]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:54 compute-0 ceph-mon[74318]: Saving service mon spec with placement count:5
Jan 21 23:23:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:54 compute-0 podman[76669]: 2026-01-21 23:23:54.43287625 +0000 UTC m=+1.361064241 container died d6a467c61b02f09b6322d5ffd13861c144f7b06d17afe43fa8ee535fb4966b2f (image=quay.io/ceph/ceph:v18, name=magical_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 21 23:23:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b2628ba8f699672e582f3b4bf1fbc50aa52b0c6df58e420a788302b5e5fd109-merged.mount: Deactivated successfully.
Jan 21 23:23:54 compute-0 podman[76669]: 2026-01-21 23:23:54.853716816 +0000 UTC m=+1.781904827 container remove d6a467c61b02f09b6322d5ffd13861c144f7b06d17afe43fa8ee535fb4966b2f (image=quay.io/ceph/ceph:v18, name=magical_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:23:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f526d1ee93001649a024ec23cbb1ede509e7cd6f50664eb74f7a353cf946fe4-merged.mount: Deactivated successfully.
Jan 21 23:23:54 compute-0 podman[76702]: 2026-01-21 23:23:54.887751926 +0000 UTC m=+1.092248683 container remove 3259dd6eeda0d60f9f508ab04479f3c71608bd26224706416069823dde4949ae (image=quay.io/ceph/ceph:v18, name=nifty_dijkstra, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:23:54 compute-0 systemd[1]: libpod-conmon-3259dd6eeda0d60f9f508ab04479f3c71608bd26224706416069823dde4949ae.scope: Deactivated successfully.
Jan 21 23:23:54 compute-0 sudo[76630]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:54 compute-0 systemd[1]: libpod-conmon-d6a467c61b02f09b6322d5ffd13861c144f7b06d17afe43fa8ee535fb4966b2f.scope: Deactivated successfully.
Jan 21 23:23:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Jan 21 23:23:54 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:54 compute-0 podman[76769]: 2026-01-21 23:23:54.938532379 +0000 UTC m=+0.060350246 container create 13642446852ae22f8be923ca4ef5219d848d920600f0f6ff01759ed5e6d90f27 (image=quay.io/ceph/ceph:v18, name=practical_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 21 23:23:54 compute-0 systemd[1]: Started libpod-conmon-13642446852ae22f8be923ca4ef5219d848d920600f0f6ff01759ed5e6d90f27.scope.
Jan 21 23:23:54 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d298a3c9824a47f8898eaf0a51f2f5ca54f4c3f987dbe8260a369b3927de948/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d298a3c9824a47f8898eaf0a51f2f5ca54f4c3f987dbe8260a369b3927de948/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d298a3c9824a47f8898eaf0a51f2f5ca54f4c3f987dbe8260a369b3927de948/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:55 compute-0 sudo[76783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:55 compute-0 sudo[76783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:55 compute-0 podman[76769]: 2026-01-21 23:23:54.918186327 +0000 UTC m=+0.040004284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:55 compute-0 sudo[76783]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:55 compute-0 podman[76769]: 2026-01-21 23:23:55.011532631 +0000 UTC m=+0.133350528 container init 13642446852ae22f8be923ca4ef5219d848d920600f0f6ff01759ed5e6d90f27 (image=quay.io/ceph/ceph:v18, name=practical_tu, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:23:55 compute-0 podman[76769]: 2026-01-21 23:23:55.020126813 +0000 UTC m=+0.141944700 container start 13642446852ae22f8be923ca4ef5219d848d920600f0f6ff01759ed5e6d90f27 (image=quay.io/ceph/ceph:v18, name=practical_tu, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 23:23:55 compute-0 podman[76769]: 2026-01-21 23:23:55.023985011 +0000 UTC m=+0.145802868 container attach 13642446852ae22f8be923ca4ef5219d848d920600f0f6ff01759ed5e6d90f27 (image=quay.io/ceph/ceph:v18, name=practical_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 23:23:55 compute-0 sudo[76814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:23:55 compute-0 sudo[76814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:55 compute-0 sudo[76814]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:55 compute-0 sudo[76840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:55 compute-0 sudo[76840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:55 compute-0 sudo[76840]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:55 compute-0 ceph-mgr[74614]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 23:23:55 compute-0 sudo[76865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 21 23:23:55 compute-0 sudo[76865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:55 compute-0 sudo[76865]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:23:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:55 compute-0 sudo[76928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:55 compute-0 sudo[76928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:55 compute-0 sudo[76928]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:55 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:55 compute-0 ceph-mgr[74614]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 21 23:23:55 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 21 23:23:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 21 23:23:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:55 compute-0 practical_tu[76803]: Scheduled mgr update...
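
The same pattern repeats for the manager service, this time with a placement count of 2 for an active/standby pair. Because convergence is asynchronous, the usual follow-up is to poll the orchestrator's service listing; a sketch below (the JSON field names service_name, running, and size are assumptions about the shape of ceph orch ls output):

    import json
    import subprocess

    # Report running vs. desired daemon counts for the mgr service.
    out = subprocess.run(["ceph", "orch", "ls", "mgr", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    for svc in json.loads(out):
        status = svc.get("status", {})
        print(svc.get("service_name"), status.get("running"), "/",
              status.get("size"))
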
Jan 21 23:23:55 compute-0 sudo[76953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:23:55 compute-0 sudo[76953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:55 compute-0 sudo[76953]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:55 compute-0 systemd[1]: libpod-13642446852ae22f8be923ca4ef5219d848d920600f0f6ff01759ed5e6d90f27.scope: Deactivated successfully.
Jan 21 23:23:55 compute-0 podman[76769]: 2026-01-21 23:23:55.627472951 +0000 UTC m=+0.749290828 container died 13642446852ae22f8be923ca4ef5219d848d920600f0f6ff01759ed5e6d90f27 (image=quay.io/ceph/ceph:v18, name=practical_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 21 23:23:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d298a3c9824a47f8898eaf0a51f2f5ca54f4c3f987dbe8260a369b3927de948-merged.mount: Deactivated successfully.
Jan 21 23:23:55 compute-0 podman[76769]: 2026-01-21 23:23:55.665547815 +0000 UTC m=+0.787365722 container remove 13642446852ae22f8be923ca4ef5219d848d920600f0f6ff01759ed5e6d90f27 (image=quay.io/ceph/ceph:v18, name=practical_tu, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:23:55 compute-0 systemd[1]: libpod-conmon-13642446852ae22f8be923ca4ef5219d848d920600f0f6ff01759ed5e6d90f27.scope: Deactivated successfully.
Jan 21 23:23:55 compute-0 sudo[76980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:55 compute-0 sudo[76980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:55 compute-0 sudo[76980]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:55 compute-0 sudo[77025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 21 23:23:55 compute-0 podman[77017]: 2026-01-21 23:23:55.742732475 +0000 UTC m=+0.054899240 container create 44a5aed828a2b5a3b82ddaef6c9a173e8f637f45ef903a10c9237047dda687fb (image=quay.io/ceph/ceph:v18, name=determined_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 21 23:23:55 compute-0 sudo[77025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:55 compute-0 systemd[1]: Started libpod-conmon-44a5aed828a2b5a3b82ddaef6c9a173e8f637f45ef903a10c9237047dda687fb.scope.
Jan 21 23:23:55 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/431440ba54bfb01882161d87818ff70306b5b5f70e63736db07df68ca1b610f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/431440ba54bfb01882161d87818ff70306b5b5f70e63736db07df68ca1b610f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/431440ba54bfb01882161d87818ff70306b5b5f70e63736db07df68ca1b610f2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:55 compute-0 podman[77017]: 2026-01-21 23:23:55.718293717 +0000 UTC m=+0.030460552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:55 compute-0 podman[77017]: 2026-01-21 23:23:55.817036196 +0000 UTC m=+0.129202971 container init 44a5aed828a2b5a3b82ddaef6c9a173e8f637f45ef903a10c9237047dda687fb (image=quay.io/ceph/ceph:v18, name=determined_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 23:23:55 compute-0 podman[77017]: 2026-01-21 23:23:55.824217836 +0000 UTC m=+0.136384591 container start 44a5aed828a2b5a3b82ddaef6c9a173e8f637f45ef903a10c9237047dda687fb (image=quay.io/ceph/ceph:v18, name=determined_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 21 23:23:55 compute-0 podman[77017]: 2026-01-21 23:23:55.827719423 +0000 UTC m=+0.139886208 container attach 44a5aed828a2b5a3b82ddaef6c9a173e8f637f45ef903a10c9237047dda687fb (image=quay.io/ceph/ceph:v18, name=determined_galois, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Jan 21 23:23:55 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:55 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:55 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:56 compute-0 podman[77137]: 2026-01-21 23:23:56.136181073 +0000 UTC m=+0.053185597 container exec 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:23:56 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:56 compute-0 ceph-mgr[74614]: [cephadm INFO root] Saving service crash spec with placement *
Jan 21 23:23:56 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 21 23:23:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 21 23:23:56 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:56 compute-0 determined_galois[77059]: Scheduled crash update...
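
Unlike the mon and mgr specs, the crash-collector spec uses placement "*", which targets every host the orchestrator manages, so a crash daemon will follow each host added later without further action. The equivalent CLI call, under the same assumptions as the earlier sketches:

    import subprocess

    # One crash collector per managed host; mirrors "Saving service crash
    # spec with placement *" above.
    subprocess.run(["ceph", "orch", "apply", "crash", "--placement=*"],
                   check=True)
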
Jan 21 23:23:56 compute-0 podman[77017]: 2026-01-21 23:23:56.368833066 +0000 UTC m=+0.680999851 container died 44a5aed828a2b5a3b82ddaef6c9a173e8f637f45ef903a10c9237047dda687fb (image=quay.io/ceph/ceph:v18, name=determined_galois, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 23:23:56 compute-0 systemd[1]: libpod-44a5aed828a2b5a3b82ddaef6c9a173e8f637f45ef903a10c9237047dda687fb.scope: Deactivated successfully.
Jan 21 23:23:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-431440ba54bfb01882161d87818ff70306b5b5f70e63736db07df68ca1b610f2-merged.mount: Deactivated successfully.
Jan 21 23:23:56 compute-0 podman[77017]: 2026-01-21 23:23:56.425197069 +0000 UTC m=+0.737363854 container remove 44a5aed828a2b5a3b82ddaef6c9a173e8f637f45ef903a10c9237047dda687fb (image=quay.io/ceph/ceph:v18, name=determined_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 21 23:23:56 compute-0 systemd[1]: libpod-conmon-44a5aed828a2b5a3b82ddaef6c9a173e8f637f45ef903a10c9237047dda687fb.scope: Deactivated successfully.
Jan 21 23:23:56 compute-0 podman[77190]: 2026-01-21 23:23:56.472717942 +0000 UTC m=+0.051542107 container exec_died 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Jan 21 23:23:56 compute-0 podman[77137]: 2026-01-21 23:23:56.477426495 +0000 UTC m=+0.394431019 container exec_died 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:23:56 compute-0 podman[77200]: 2026-01-21 23:23:56.488625068 +0000 UTC m=+0.043577533 container create 25b6a87b5c565d16fac576677a3ed0d2f172a5d1a5c43ef9703577d3aad85294 (image=quay.io/ceph/ceph:v18, name=dreamy_haibt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 21 23:23:56 compute-0 systemd[1]: Started libpod-conmon-25b6a87b5c565d16fac576677a3ed0d2f172a5d1a5c43ef9703577d3aad85294.scope.
Jan 21 23:23:56 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65ede5defb783d46978425830aa06589b7c6018d309fa8fe32e4ccb883e20c30/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65ede5defb783d46978425830aa06589b7c6018d309fa8fe32e4ccb883e20c30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65ede5defb783d46978425830aa06589b7c6018d309fa8fe32e4ccb883e20c30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:56 compute-0 podman[77200]: 2026-01-21 23:23:56.467356398 +0000 UTC m=+0.022308953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:56 compute-0 podman[77200]: 2026-01-21 23:23:56.581303941 +0000 UTC m=+0.136256446 container init 25b6a87b5c565d16fac576677a3ed0d2f172a5d1a5c43ef9703577d3aad85294 (image=quay.io/ceph/ceph:v18, name=dreamy_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:23:56 compute-0 podman[77200]: 2026-01-21 23:23:56.592516104 +0000 UTC m=+0.147468569 container start 25b6a87b5c565d16fac576677a3ed0d2f172a5d1a5c43ef9703577d3aad85294 (image=quay.io/ceph/ceph:v18, name=dreamy_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 21 23:23:56 compute-0 podman[77200]: 2026-01-21 23:23:56.595737373 +0000 UTC m=+0.150689838 container attach 25b6a87b5c565d16fac576677a3ed0d2f172a5d1a5c43ef9703577d3aad85294 (image=quay.io/ceph/ceph:v18, name=dreamy_haibt, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:23:56 compute-0 sudo[77025]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:23:56 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:56 compute-0 sudo[77243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:56 compute-0 sudo[77243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:56 compute-0 sudo[77243]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:56 compute-0 sudo[77268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:23:56 compute-0 sudo[77268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:56 compute-0 sudo[77268]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:56 compute-0 sudo[77293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:56 compute-0 sudo[77293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:56 compute-0 sudo[77293]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:56 compute-0 ceph-mon[74318]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:56 compute-0 ceph-mon[74318]: Saving service mgr spec with placement count:2
Jan 21 23:23:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:56 compute-0 sudo[77319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:23:56 compute-0 sudo[77319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
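
gather-facts collects a JSON document of host facts (hostname, OS and kernel, CPU, memory, NICs, disks) that the orchestrator caches per host; the binfmt_misc automount triggered by sysctl a few lines below is a side effect of this probing. A toy fact-gatherer in the same spirit (illustrative only; the real tool reports far more):

    import json
    import os
    import platform
    import socket

    facts = {
        "hostname": socket.gethostname(),
        "kernel": platform.release(),
        "arch": platform.machine(),
        "cpu_count": os.cpu_count(),
    }
    print(json.dumps(facts, indent=2))
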
Jan 21 23:23:57 compute-0 ceph-mgr[74614]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 23:23:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:23:57 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77374 (sysctl)
Jan 21 23:23:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Jan 21 23:23:57 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3219505875' entity='client.admin' 
Jan 21 23:23:57 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 21 23:23:57 compute-0 systemd[1]: libpod-25b6a87b5c565d16fac576677a3ed0d2f172a5d1a5c43ef9703577d3aad85294.scope: Deactivated successfully.
Jan 21 23:23:57 compute-0 podman[77200]: 2026-01-21 23:23:57.19584694 +0000 UTC m=+0.750799405 container died 25b6a87b5c565d16fac576677a3ed0d2f172a5d1a5c43ef9703577d3aad85294 (image=quay.io/ceph/ceph:v18, name=dreamy_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 23:23:57 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 21 23:23:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-65ede5defb783d46978425830aa06589b7c6018d309fa8fe32e4ccb883e20c30-merged.mount: Deactivated successfully.
Jan 21 23:23:57 compute-0 podman[77200]: 2026-01-21 23:23:57.238847393 +0000 UTC m=+0.793799868 container remove 25b6a87b5c565d16fac576677a3ed0d2f172a5d1a5c43ef9703577d3aad85294 (image=quay.io/ceph/ceph:v18, name=dreamy_haibt, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:23:57 compute-0 systemd[1]: libpod-conmon-25b6a87b5c565d16fac576677a3ed0d2f172a5d1a5c43ef9703577d3aad85294.scope: Deactivated successfully.
Jan 21 23:23:57 compute-0 podman[77394]: 2026-01-21 23:23:57.295941259 +0000 UTC m=+0.039243890 container create f2cb85bdd9754ea3b3e33e52f0632ece65f1df4936974ce12e479c99402b525a (image=quay.io/ceph/ceph:v18, name=zen_kapitsa, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 21 23:23:57 compute-0 systemd[1]: Started libpod-conmon-f2cb85bdd9754ea3b3e33e52f0632ece65f1df4936974ce12e479c99402b525a.scope.
Jan 21 23:23:57 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:57 compute-0 podman[77394]: 2026-01-21 23:23:57.278981561 +0000 UTC m=+0.022284192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a02507017664df0def5f9a400ab9f2e565e4ae3d2c940a679724a3aa35433c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a02507017664df0def5f9a400ab9f2e565e4ae3d2c940a679724a3aa35433c2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a02507017664df0def5f9a400ab9f2e565e4ae3d2c940a679724a3aa35433c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:57 compute-0 podman[77394]: 2026-01-21 23:23:57.39083633 +0000 UTC m=+0.134139021 container init f2cb85bdd9754ea3b3e33e52f0632ece65f1df4936974ce12e479c99402b525a (image=quay.io/ceph/ceph:v18, name=zen_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 23:23:57 compute-0 podman[77394]: 2026-01-21 23:23:57.398015309 +0000 UTC m=+0.141317920 container start f2cb85bdd9754ea3b3e33e52f0632ece65f1df4936974ce12e479c99402b525a (image=quay.io/ceph/ceph:v18, name=zen_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:23:57 compute-0 podman[77394]: 2026-01-21 23:23:57.401400773 +0000 UTC m=+0.144703424 container attach f2cb85bdd9754ea3b3e33e52f0632ece65f1df4936974ce12e479c99402b525a (image=quay.io/ceph/ceph:v18, name=zen_kapitsa, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:23:57 compute-0 sudo[77319]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:57 compute-0 sudo[77433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:57 compute-0 sudo[77433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:57 compute-0 sudo[77433]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:57 compute-0 sudo[77458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:23:57 compute-0 sudo[77458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:57 compute-0 sudo[77458]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:57 compute-0 sudo[77502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:57 compute-0 sudo[77502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:57 compute-0 sudo[77502]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:57 compute-0 sudo[77527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 21 23:23:57 compute-0 sudo[77527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
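
list-networks enumerates the host's interfaces, addresses, and subnets so the orchestrator can decide where monitors and other daemons may bind. A rough stand-in built on iproute2's JSON mode (ip -j; output shape per iproute2, and simplified relative to cephadm's real subnet-keyed output):

    import json
    import subprocess

    links = json.loads(subprocess.run(["ip", "-j", "addr", "show"],
                                      capture_output=True, text=True,
                                      check=True).stdout)
    nets = {}
    for link in links:
        for addr in link.get("addr_info", []):
            nets.setdefault(link["ifname"], []).append(
                f"{addr['local']}/{addr['prefixlen']}")
    print(json.dumps(nets, indent=2))
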
Jan 21 23:23:57 compute-0 ceph-mon[74318]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:57 compute-0 ceph-mon[74318]: Saving service crash spec with placement *
Jan 21 23:23:57 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3219505875' entity='client.admin' 
Jan 21 23:23:57 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Jan 21 23:23:57 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
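
The client-keyring rule above tells cephadm to maintain a copy of the client.admin keyring on every host carrying the _admin label, persisted under mgr/cephadm/client_keyrings; the xfs remount lines for ceph.client.admin.keyring earlier in this section show that file being bind-mounted into the helper containers. The documented CLI forms, under the same assumptions as the earlier sketches:

    import subprocess

    # Keep client.admin's keyring on all hosts labelled `_admin`.
    subprocess.run(["ceph", "orch", "client-keyring", "set",
                    "client.admin", "label:_admin"], check=True)

    # A host acquires the label with:
    subprocess.run(["ceph", "orch", "host", "label", "add",
                    "compute-0", "_admin"], check=True)
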
Jan 21 23:23:57 compute-0 systemd[1]: libpod-f2cb85bdd9754ea3b3e33e52f0632ece65f1df4936974ce12e479c99402b525a.scope: Deactivated successfully.
Jan 21 23:23:57 compute-0 podman[77394]: 2026-01-21 23:23:57.978429364 +0000 UTC m=+0.721732015 container died f2cb85bdd9754ea3b3e33e52f0632ece65f1df4936974ce12e479c99402b525a (image=quay.io/ceph/ceph:v18, name=zen_kapitsa, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 23:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a02507017664df0def5f9a400ab9f2e565e4ae3d2c940a679724a3aa35433c2-merged.mount: Deactivated successfully.
Jan 21 23:23:58 compute-0 podman[77394]: 2026-01-21 23:23:58.0368499 +0000 UTC m=+0.780152521 container remove f2cb85bdd9754ea3b3e33e52f0632ece65f1df4936974ce12e479c99402b525a (image=quay.io/ceph/ceph:v18, name=zen_kapitsa, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:23:58 compute-0 systemd[1]: libpod-conmon-f2cb85bdd9754ea3b3e33e52f0632ece65f1df4936974ce12e479c99402b525a.scope: Deactivated successfully.
Jan 21 23:23:58 compute-0 podman[77567]: 2026-01-21 23:23:58.110265995 +0000 UTC m=+0.050106073 container create 782b446c8642868aab7d9938a70b76518de3c8f880938f6b4d41bb6ca6d9e52e (image=quay.io/ceph/ceph:v18, name=kind_lamport, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:23:58 compute-0 systemd[1]: Started libpod-conmon-782b446c8642868aab7d9938a70b76518de3c8f880938f6b4d41bb6ca6d9e52e.scope.
Jan 21 23:23:58 compute-0 sudo[77527]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:23:58 compute-0 podman[77567]: 2026-01-21 23:23:58.084737784 +0000 UTC m=+0.024577952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:58 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bdec7f9d0463ba40fcabab148d5e82ed586718965dabcf117ddeb0ed1a7266/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:58 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bdec7f9d0463ba40fcabab148d5e82ed586718965dabcf117ddeb0ed1a7266/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bdec7f9d0463ba40fcabab148d5e82ed586718965dabcf117ddeb0ed1a7266/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:58 compute-0 podman[77567]: 2026-01-21 23:23:58.199789972 +0000 UTC m=+0.139630100 container init 782b446c8642868aab7d9938a70b76518de3c8f880938f6b4d41bb6ca6d9e52e (image=quay.io/ceph/ceph:v18, name=kind_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:23:58 compute-0 podman[77567]: 2026-01-21 23:23:58.207441825 +0000 UTC m=+0.147281923 container start 782b446c8642868aab7d9938a70b76518de3c8f880938f6b4d41bb6ca6d9e52e (image=quay.io/ceph/ceph:v18, name=kind_lamport, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:23:58 compute-0 podman[77567]: 2026-01-21 23:23:58.211199641 +0000 UTC m=+0.151039749 container attach 782b446c8642868aab7d9938a70b76518de3c8f880938f6b4d41bb6ca6d9e52e (image=quay.io/ceph/ceph:v18, name=kind_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:23:58 compute-0 sudo[77603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:58 compute-0 sudo[77603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:58 compute-0 sudo[77603]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:58 compute-0 sudo[77630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:23:58 compute-0 sudo[77630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:58 compute-0 sudo[77630]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:58 compute-0 sudo[77655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:23:58 compute-0 sudo[77655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:58 compute-0 sudo[77655]: pam_unix(sudo:session): session closed for user root
Jan 21 23:23:58 compute-0 sudo[77680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- inventory --format=json-pretty --filter-for-batch
Jan 21 23:23:58 compute-0 sudo[77680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:23:58 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 21 23:23:58 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:58 compute-0 ceph-mgr[74614]: [cephadm INFO root] Added label _admin to host compute-0
Jan 21 23:23:58 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 21 23:23:58 compute-0 kind_lamport[77600]: Added label _admin to host compute-0
Jan 21 23:23:58 compute-0 systemd[1]: libpod-782b446c8642868aab7d9938a70b76518de3c8f880938f6b4d41bb6ca6d9e52e.scope: Deactivated successfully.
Jan 21 23:23:58 compute-0 podman[77567]: 2026-01-21 23:23:58.790337706 +0000 UTC m=+0.730177834 container died 782b446c8642868aab7d9938a70b76518de3c8f880938f6b4d41bb6ca6d9e52e (image=quay.io/ceph/ceph:v18, name=kind_lamport, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 23:23:58 compute-0 podman[77764]: 2026-01-21 23:23:58.814864586 +0000 UTC m=+0.041102878 container create 84514ae31325fd582b21e9aa6b0a6d9f929f51c16eaebab825fbd3eb3de46561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-64bdec7f9d0463ba40fcabab148d5e82ed586718965dabcf117ddeb0ed1a7266-merged.mount: Deactivated successfully.
Jan 21 23:23:58 compute-0 podman[77567]: 2026-01-21 23:23:58.840791918 +0000 UTC m=+0.780632006 container remove 782b446c8642868aab7d9938a70b76518de3c8f880938f6b4d41bb6ca6d9e52e (image=quay.io/ceph/ceph:v18, name=kind_lamport, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:23:58 compute-0 systemd[1]: Started libpod-conmon-84514ae31325fd582b21e9aa6b0a6d9f929f51c16eaebab825fbd3eb3de46561.scope.
Jan 21 23:23:58 compute-0 systemd[1]: libpod-conmon-782b446c8642868aab7d9938a70b76518de3c8f880938f6b4d41bb6ca6d9e52e.scope: Deactivated successfully.
Jan 21 23:23:58 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:58 compute-0 podman[77764]: 2026-01-21 23:23:58.795367229 +0000 UTC m=+0.021605561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:23:58 compute-0 podman[77764]: 2026-01-21 23:23:58.89252344 +0000 UTC m=+0.118761762 container init 84514ae31325fd582b21e9aa6b0a6d9f929f51c16eaebab825fbd3eb3de46561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:23:58 compute-0 podman[77764]: 2026-01-21 23:23:58.897279526 +0000 UTC m=+0.123517818 container start 84514ae31325fd582b21e9aa6b0a6d9f929f51c16eaebab825fbd3eb3de46561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 23:23:58 compute-0 podman[77795]: 2026-01-21 23:23:58.898254346 +0000 UTC m=+0.037242300 container create be9fc8a214851268adab3e8b293906d74fffe05b7e4403fa5813061637979a15 (image=quay.io/ceph/ceph:v18, name=thirsty_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 23:23:58 compute-0 dreamy_black[77802]: 167 167
Jan 21 23:23:58 compute-0 podman[77764]: 2026-01-21 23:23:58.901404591 +0000 UTC m=+0.127642883 container attach 84514ae31325fd582b21e9aa6b0a6d9f929f51c16eaebab825fbd3eb3de46561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:23:58 compute-0 systemd[1]: libpod-84514ae31325fd582b21e9aa6b0a6d9f929f51c16eaebab825fbd3eb3de46561.scope: Deactivated successfully.
Jan 21 23:23:58 compute-0 podman[77764]: 2026-01-21 23:23:58.902272578 +0000 UTC m=+0.128510940 container died 84514ae31325fd582b21e9aa6b0a6d9f929f51c16eaebab825fbd3eb3de46561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 23:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5425a3d00549dc52d0572315cb653f1a027076658ff65725660781736a84b9b1-merged.mount: Deactivated successfully.
Jan 21 23:23:58 compute-0 podman[77764]: 2026-01-21 23:23:58.94747001 +0000 UTC m=+0.173708302 container remove 84514ae31325fd582b21e9aa6b0a6d9f929f51c16eaebab825fbd3eb3de46561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 21 23:23:58 compute-0 systemd[1]: Started libpod-conmon-be9fc8a214851268adab3e8b293906d74fffe05b7e4403fa5813061637979a15.scope.
Jan 21 23:23:58 compute-0 systemd[1]: libpod-conmon-84514ae31325fd582b21e9aa6b0a6d9f929f51c16eaebab825fbd3eb3de46561.scope: Deactivated successfully.
Jan 21 23:23:58 compute-0 ceph-mon[74318]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:58 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:58 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:58 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:23:58 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e98834eb7ff0c22a367cf7583bfa38bfc2bc6ffda62cd7d4529f9b229ad4ce1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e98834eb7ff0c22a367cf7583bfa38bfc2bc6ffda62cd7d4529f9b229ad4ce1a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e98834eb7ff0c22a367cf7583bfa38bfc2bc6ffda62cd7d4529f9b229ad4ce1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:58 compute-0 podman[77795]: 2026-01-21 23:23:58.882576626 +0000 UTC m=+0.021564570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:58 compute-0 podman[77795]: 2026-01-21 23:23:58.995054415 +0000 UTC m=+0.134042359 container init be9fc8a214851268adab3e8b293906d74fffe05b7e4403fa5813061637979a15 (image=quay.io/ceph/ceph:v18, name=thirsty_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:23:59 compute-0 podman[77795]: 2026-01-21 23:23:59.00506285 +0000 UTC m=+0.144050804 container start be9fc8a214851268adab3e8b293906d74fffe05b7e4403fa5813061637979a15 (image=quay.io/ceph/ceph:v18, name=thirsty_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:23:59 compute-0 podman[77795]: 2026-01-21 23:23:59.008060272 +0000 UTC m=+0.147048236 container attach be9fc8a214851268adab3e8b293906d74fffe05b7e4403fa5813061637979a15 (image=quay.io/ceph/ceph:v18, name=thirsty_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 21 23:23:59 compute-0 ceph-mgr[74614]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 21 23:23:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:23:59 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 21 23:23:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Jan 21 23:23:59 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1112208889' entity='client.admin' 
Jan 21 23:23:59 compute-0 systemd[1]: libpod-be9fc8a214851268adab3e8b293906d74fffe05b7e4403fa5813061637979a15.scope: Deactivated successfully.
Jan 21 23:23:59 compute-0 podman[77795]: 2026-01-21 23:23:59.528299877 +0000 UTC m=+0.667287871 container died be9fc8a214851268adab3e8b293906d74fffe05b7e4403fa5813061637979a15 (image=quay.io/ceph/ceph:v18, name=thirsty_cartwright, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 21 23:23:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-e98834eb7ff0c22a367cf7583bfa38bfc2bc6ffda62cd7d4529f9b229ad4ce1a-merged.mount: Deactivated successfully.
Jan 21 23:23:59 compute-0 podman[77795]: 2026-01-21 23:23:59.585864247 +0000 UTC m=+0.724852221 container remove be9fc8a214851268adab3e8b293906d74fffe05b7e4403fa5813061637979a15 (image=quay.io/ceph/ceph:v18, name=thirsty_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:23:59 compute-0 systemd[1]: libpod-conmon-be9fc8a214851268adab3e8b293906d74fffe05b7e4403fa5813061637979a15.scope: Deactivated successfully.
Jan 21 23:23:59 compute-0 podman[77867]: 2026-01-21 23:23:59.65367923 +0000 UTC m=+0.051411023 container create fc42dcb05ffc64c3a4744f7e3a582db492caf94b19a3900a1ec2dcfba923897a (image=quay.io/ceph/ceph:v18, name=hungry_meitner, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 23:23:59 compute-0 systemd[1]: Started libpod-conmon-fc42dcb05ffc64c3a4744f7e3a582db492caf94b19a3900a1ec2dcfba923897a.scope.
Jan 21 23:23:59 compute-0 podman[77867]: 2026-01-21 23:23:59.625349094 +0000 UTC m=+0.023080937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:23:59 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:23:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0f4f122b6530f64d9aba1d7981463aa68193a9af59fc61514af276b8f5eed5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0f4f122b6530f64d9aba1d7981463aa68193a9af59fc61514af276b8f5eed5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0f4f122b6530f64d9aba1d7981463aa68193a9af59fc61514af276b8f5eed5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:23:59 compute-0 podman[77867]: 2026-01-21 23:23:59.747027814 +0000 UTC m=+0.144759667 container init fc42dcb05ffc64c3a4744f7e3a582db492caf94b19a3900a1ec2dcfba923897a (image=quay.io/ceph/ceph:v18, name=hungry_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 23:23:59 compute-0 podman[77867]: 2026-01-21 23:23:59.757775863 +0000 UTC m=+0.155507626 container start fc42dcb05ffc64c3a4744f7e3a582db492caf94b19a3900a1ec2dcfba923897a (image=quay.io/ceph/ceph:v18, name=hungry_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 21 23:23:59 compute-0 podman[77867]: 2026-01-21 23:23:59.761439334 +0000 UTC m=+0.159171167 container attach fc42dcb05ffc64c3a4744f7e3a582db492caf94b19a3900a1ec2dcfba923897a (image=quay.io/ceph/ceph:v18, name=hungry_meitner, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 21 23:23:59 compute-0 ceph-mon[74318]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:23:59 compute-0 ceph-mon[74318]: Added label _admin to host compute-0
Jan 21 23:23:59 compute-0 ceph-mon[74318]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:23:59 compute-0 ceph-mon[74318]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 21 23:23:59 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1112208889' entity='client.admin' 
Jan 21 23:24:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Jan 21 23:24:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1226718173' entity='client.admin' 
Jan 21 23:24:00 compute-0 hungry_meitner[77883]: set mgr/dashboard/cluster/status
Jan 21 23:24:00 compute-0 systemd[1]: libpod-fc42dcb05ffc64c3a4744f7e3a582db492caf94b19a3900a1ec2dcfba923897a.scope: Deactivated successfully.
Jan 21 23:24:00 compute-0 podman[77867]: 2026-01-21 23:24:00.390123385 +0000 UTC m=+0.787855148 container died fc42dcb05ffc64c3a4744f7e3a582db492caf94b19a3900a1ec2dcfba923897a (image=quay.io/ceph/ceph:v18, name=hungry_meitner, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 21 23:24:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b0f4f122b6530f64d9aba1d7981463aa68193a9af59fc61514af276b8f5eed5-merged.mount: Deactivated successfully.
Jan 21 23:24:00 compute-0 podman[77867]: 2026-01-21 23:24:00.434329726 +0000 UTC m=+0.832061489 container remove fc42dcb05ffc64c3a4744f7e3a582db492caf94b19a3900a1ec2dcfba923897a (image=quay.io/ceph/ceph:v18, name=hungry_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 21 23:24:00 compute-0 systemd[1]: libpod-conmon-fc42dcb05ffc64c3a4744f7e3a582db492caf94b19a3900a1ec2dcfba923897a.scope: Deactivated successfully.
Jan 21 23:24:00 compute-0 sudo[73300]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:00 compute-0 podman[77928]: 2026-01-21 23:24:00.659405968 +0000 UTC m=+0.053994882 container create e3c4171d8d935db5b75f45468c295735ced7c4fbe93444534d8e7d4eb935c8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_curran, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 23:24:00 compute-0 systemd[1]: Started libpod-conmon-e3c4171d8d935db5b75f45468c295735ced7c4fbe93444534d8e7d4eb935c8b4.scope.
Jan 21 23:24:00 compute-0 podman[77928]: 2026-01-21 23:24:00.633606399 +0000 UTC m=+0.028195323 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:24:00 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:24:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b145b32f2168d46b59f37532ba281310552ae60e5547e844a30a5ac48dddec62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b145b32f2168d46b59f37532ba281310552ae60e5547e844a30a5ac48dddec62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b145b32f2168d46b59f37532ba281310552ae60e5547e844a30a5ac48dddec62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b145b32f2168d46b59f37532ba281310552ae60e5547e844a30a5ac48dddec62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:00 compute-0 podman[77928]: 2026-01-21 23:24:00.75239848 +0000 UTC m=+0.146987444 container init e3c4171d8d935db5b75f45468c295735ced7c4fbe93444534d8e7d4eb935c8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_curran, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:24:00 compute-0 podman[77928]: 2026-01-21 23:24:00.763639164 +0000 UTC m=+0.158228078 container start e3c4171d8d935db5b75f45468c295735ced7c4fbe93444534d8e7d4eb935c8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_curran, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:24:00 compute-0 podman[77928]: 2026-01-21 23:24:00.767497632 +0000 UTC m=+0.162086596 container attach e3c4171d8d935db5b75f45468c295735ced7c4fbe93444534d8e7d4eb935c8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_curran, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:24:00 compute-0 sudo[77973]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbsmddpzhvbjdwhpfvnfikypfiaunxlj ; /usr/bin/python3'
Jan 21 23:24:00 compute-0 sudo[77973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:24:01 compute-0 python3[77975]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:24:01 compute-0 podman[77976]: 2026-01-21 23:24:01.084782683 +0000 UTC m=+0.050961130 container create 24cca210d9c16041c5cc4e8ba692d9f8c22bdfbcb57578804f250ce003a2b118 (image=quay.io/ceph/ceph:v18, name=ecstatic_sinoussi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:24:01 compute-0 systemd[1]: Started libpod-conmon-24cca210d9c16041c5cc4e8ba692d9f8c22bdfbcb57578804f250ce003a2b118.scope.
Jan 21 23:24:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:01 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbae66442102bffda5416f0a6b89c10d0adbb17ed9b81a731303b39fbfd3def7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbae66442102bffda5416f0a6b89c10d0adbb17ed9b81a731303b39fbfd3def7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:01 compute-0 podman[77976]: 2026-01-21 23:24:01.062837971 +0000 UTC m=+0.029016518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:24:01 compute-0 podman[77976]: 2026-01-21 23:24:01.166341025 +0000 UTC m=+0.132519512 container init 24cca210d9c16041c5cc4e8ba692d9f8c22bdfbcb57578804f250ce003a2b118 (image=quay.io/ceph/ceph:v18, name=ecstatic_sinoussi, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:24:01 compute-0 podman[77976]: 2026-01-21 23:24:01.176718203 +0000 UTC m=+0.142896670 container start 24cca210d9c16041c5cc4e8ba692d9f8c22bdfbcb57578804f250ce003a2b118 (image=quay.io/ceph/ceph:v18, name=ecstatic_sinoussi, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:24:01 compute-0 podman[77976]: 2026-01-21 23:24:01.180187389 +0000 UTC m=+0.146365876 container attach 24cca210d9c16041c5cc4e8ba692d9f8c22bdfbcb57578804f250ce003a2b118 (image=quay.io/ceph/ceph:v18, name=ecstatic_sinoussi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 21 23:24:01 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1226718173' entity='client.admin' 
Jan 21 23:24:01 compute-0 ceph-mon[74318]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:01 compute-0 anacron[30932]: Job `cron.daily' started
Jan 21 23:24:01 compute-0 anacron[30932]: Job `cron.daily' terminated
Jan 21 23:24:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Jan 21 23:24:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2980156886' entity='client.admin' 
Jan 21 23:24:01 compute-0 systemd[1]: libpod-24cca210d9c16041c5cc4e8ba692d9f8c22bdfbcb57578804f250ce003a2b118.scope: Deactivated successfully.
Jan 21 23:24:01 compute-0 podman[77976]: 2026-01-21 23:24:01.736308331 +0000 UTC m=+0.702486838 container died 24cca210d9c16041c5cc4e8ba692d9f8c22bdfbcb57578804f250ce003a2b118 (image=quay.io/ceph/ceph:v18, name=ecstatic_sinoussi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 23:24:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbae66442102bffda5416f0a6b89c10d0adbb17ed9b81a731303b39fbfd3def7-merged.mount: Deactivated successfully.
Jan 21 23:24:01 compute-0 podman[77976]: 2026-01-21 23:24:01.792091556 +0000 UTC m=+0.758270013 container remove 24cca210d9c16041c5cc4e8ba692d9f8c22bdfbcb57578804f250ce003a2b118 (image=quay.io/ceph/ceph:v18, name=ecstatic_sinoussi, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:24:01 compute-0 systemd[1]: libpod-conmon-24cca210d9c16041c5cc4e8ba692d9f8c22bdfbcb57578804f250ce003a2b118.scope: Deactivated successfully.
Jan 21 23:24:01 compute-0 sudo[77973]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:01 compute-0 vibrant_curran[77945]: [
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:     {
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:         "available": false,
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:         "ceph_device": false,
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:         "lsm_data": {},
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:         "lvs": [],
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:         "path": "/dev/sr0",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:         "rejected_reasons": [
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "Has a FileSystem",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "Insufficient space (<5GB)"
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:         ],
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:         "sys_api": {
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "actuators": null,
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "device_nodes": "sr0",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "devname": "sr0",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "human_readable_size": "482.00 KB",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "id_bus": "ata",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "model": "QEMU DVD-ROM",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "nr_requests": "2",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "parent": "/dev/sr0",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "partitions": {},
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "path": "/dev/sr0",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "removable": "1",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "rev": "2.5+",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "ro": "0",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "rotational": "1",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "sas_address": "",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "sas_device_handle": "",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "scheduler_mode": "mq-deadline",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "sectors": 0,
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "sectorsize": "2048",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "size": 493568.0,
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "support_discard": "2048",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "type": "disk",
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:             "vendor": "QEMU"
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:         }
Jan 21 23:24:01 compute-0 vibrant_curran[77945]:     }
Jan 21 23:24:01 compute-0 vibrant_curran[77945]: ]
Jan 21 23:24:01 compute-0 systemd[1]: libpod-e3c4171d8d935db5b75f45468c295735ced7c4fbe93444534d8e7d4eb935c8b4.scope: Deactivated successfully.
Jan 21 23:24:01 compute-0 podman[77928]: 2026-01-21 23:24:01.916370456 +0000 UTC m=+1.310959330 container died e3c4171d8d935db5b75f45468c295735ced7c4fbe93444534d8e7d4eb935c8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_curran, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:24:01 compute-0 systemd[1]: libpod-e3c4171d8d935db5b75f45468c295735ced7c4fbe93444534d8e7d4eb935c8b4.scope: Consumed 1.141s CPU time.
Jan 21 23:24:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-b145b32f2168d46b59f37532ba281310552ae60e5547e844a30a5ac48dddec62-merged.mount: Deactivated successfully.
Jan 21 23:24:01 compute-0 podman[77928]: 2026-01-21 23:24:01.960071641 +0000 UTC m=+1.354660515 container remove e3c4171d8d935db5b75f45468c295735ced7c4fbe93444534d8e7d4eb935c8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_curran, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 21 23:24:01 compute-0 systemd[1]: libpod-conmon-e3c4171d8d935db5b75f45468c295735ced7c4fbe93444534d8e7d4eb935c8b4.scope: Deactivated successfully.
Jan 21 23:24:01 compute-0 sudo[77680]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:24:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:24:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:24:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:24:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 21 23:24:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 21 23:24:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:24:02 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:24:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:24:02 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 21 23:24:02 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 21 23:24:02 compute-0 sudo[78972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:02 compute-0 sudo[78972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:02 compute-0 sudo[78972]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:24:02 compute-0 sudo[78997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 21 23:24:02 compute-0 sudo[78997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:02 compute-0 sudo[78997]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:02 compute-0 sudo[79045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:02 compute-0 sudo[79045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:02 compute-0 sudo[79045]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:02 compute-0 sudo[79099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/etc/ceph
Jan 21 23:24:02 compute-0 sudo[79099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:02 compute-0 sudo[79099]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:02 compute-0 sudo[79147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:02 compute-0 sudo[79147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:02 compute-0 sudo[79147]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:02 compute-0 sudo[79172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/etc/ceph/ceph.conf.new
Jan 21 23:24:02 compute-0 sudo[79172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:02 compute-0 sudo[79172]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:02 compute-0 sudo[79201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:02 compute-0 sudo[79201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:02 compute-0 sudo[79201]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:02 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2980156886' entity='client.admin' 
Jan 21 23:24:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 21 23:24:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:24:02 compute-0 ceph-mon[74318]: Updating compute-0:/etc/ceph/ceph.conf
Jan 21 23:24:02 compute-0 sudo[79250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:24:02 compute-0 sudo[79250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:02 compute-0 sudo[79250]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:02 compute-0 sudo[79332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsjhohvnjxhxucgbneqoegcikameupbe ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769037842.248057-37259-249987863730180/async_wrapper.py j457999791183 30 /home/zuul/.ansible/tmp/ansible-tmp-1769037842.248057-37259-249987863730180/AnsiballZ_command.py _'
Jan 21 23:24:02 compute-0 sudo[79332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:24:02 compute-0 sudo[79304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:02 compute-0 sudo[79304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:02 compute-0 sudo[79304]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:02 compute-0 sudo[79347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/etc/ceph/ceph.conf.new
Jan 21 23:24:02 compute-0 sudo[79347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:02 compute-0 sudo[79347]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:02 compute-0 ansible-async_wrapper.py[79344]: Invoked with j457999791183 30 /home/zuul/.ansible/tmp/ansible-tmp-1769037842.248057-37259-249987863730180/AnsiballZ_command.py _
Jan 21 23:24:02 compute-0 ansible-async_wrapper.py[79397]: Starting module and watcher
Jan 21 23:24:02 compute-0 ansible-async_wrapper.py[79397]: Start watching 79398 (30)
Jan 21 23:24:02 compute-0 ansible-async_wrapper.py[79398]: Start module (79398)
Jan 21 23:24:02 compute-0 ansible-async_wrapper.py[79344]: Return async_wrapper task started.
Jan 21 23:24:02 compute-0 sudo[79332]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:03 compute-0 sudo[79400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:03 compute-0 sudo[79400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:03 compute-0 sudo[79400]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:03 compute-0 sudo[79425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/etc/ceph/ceph.conf.new
Jan 21 23:24:03 compute-0 sudo[79425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:03 compute-0 sudo[79425]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:03 compute-0 python3[79399]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:24:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:03 compute-0 sudo[79450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:03 compute-0 sudo[79450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:03 compute-0 sudo[79450]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:03 compute-0 podman[79451]: 2026-01-21 23:24:03.222839157 +0000 UTC m=+0.078591024 container create ad14156fc7320624a8518d8b051c2d96f43f854acf0c57fbb64bc25cce303c9f (image=quay.io/ceph/ceph:v18, name=adoring_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:24:03 compute-0 systemd[1]: Started libpod-conmon-ad14156fc7320624a8518d8b051c2d96f43f854acf0c57fbb64bc25cce303c9f.scope.
Jan 21 23:24:03 compute-0 podman[79451]: 2026-01-21 23:24:03.177339386 +0000 UTC m=+0.033091253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:24:03 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:24:03 compute-0 sudo[79488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/etc/ceph/ceph.conf.new
Jan 21 23:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004b56516ffb5a203491d74869c6bc89561eae34dd42ac9e28a26f8612181461/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:03 compute-0 sudo[79488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004b56516ffb5a203491d74869c6bc89561eae34dd42ac9e28a26f8612181461/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:03 compute-0 sudo[79488]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:03 compute-0 podman[79451]: 2026-01-21 23:24:03.340304628 +0000 UTC m=+0.196056475 container init ad14156fc7320624a8518d8b051c2d96f43f854acf0c57fbb64bc25cce303c9f (image=quay.io/ceph/ceph:v18, name=adoring_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:24:03 compute-0 podman[79451]: 2026-01-21 23:24:03.350317184 +0000 UTC m=+0.206069011 container start ad14156fc7320624a8518d8b051c2d96f43f854acf0c57fbb64bc25cce303c9f (image=quay.io/ceph/ceph:v18, name=adoring_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 21 23:24:03 compute-0 podman[79451]: 2026-01-21 23:24:03.378928839 +0000 UTC m=+0.234680666 container attach ad14156fc7320624a8518d8b051c2d96f43f854acf0c57fbb64bc25cce303c9f (image=quay.io/ceph/ceph:v18, name=adoring_jemison, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 21 23:24:03 compute-0 sudo[79518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:03 compute-0 sudo[79518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:03 compute-0 sudo[79518]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:03 compute-0 sudo[79544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 21 23:24:03 compute-0 sudo[79544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:03 compute-0 sudo[79544]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:03 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:24:03 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:24:03 compute-0 sudo[79569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:03 compute-0 sudo[79569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:03 compute-0 sudo[79569]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:03 compute-0 sudo[79594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config
Jan 21 23:24:03 compute-0 sudo[79594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:03 compute-0 sudo[79594]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:03 compute-0 ceph-mon[74318]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:03 compute-0 sudo[79628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:03 compute-0 sudo[79628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:03 compute-0 sudo[79628]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:03 compute-0 sudo[79663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config
Jan 21 23:24:03 compute-0 sudo[79663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:03 compute-0 sudo[79663]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:03 compute-0 sudo[79688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:03 compute-0 sudo[79688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:03 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 23:24:03 compute-0 adoring_jemison[79513]: 
Jan 21 23:24:03 compute-0 adoring_jemison[79513]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 21 23:24:03 compute-0 sudo[79688]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:03 compute-0 systemd[1]: libpod-ad14156fc7320624a8518d8b051c2d96f43f854acf0c57fbb64bc25cce303c9f.scope: Deactivated successfully.
Jan 21 23:24:03 compute-0 podman[79451]: 2026-01-21 23:24:03.956134735 +0000 UTC m=+0.811886602 container died ad14156fc7320624a8518d8b051c2d96f43f854acf0c57fbb64bc25cce303c9f (image=quay.io/ceph/ceph:v18, name=adoring_jemison, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:24:04 compute-0 sudo[79715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf.new
Jan 21 23:24:04 compute-0 sudo[79715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:04 compute-0 sudo[79715]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:04 compute-0 sudo[79758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:04 compute-0 sudo[79758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:04 compute-0 sudo[79758]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-004b56516ffb5a203491d74869c6bc89561eae34dd42ac9e28a26f8612181461-merged.mount: Deactivated successfully.
Jan 21 23:24:04 compute-0 podman[79451]: 2026-01-21 23:24:04.183921039 +0000 UTC m=+1.039672876 container remove ad14156fc7320624a8518d8b051c2d96f43f854acf0c57fbb64bc25cce303c9f (image=quay.io/ceph/ceph:v18, name=adoring_jemison, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:24:04 compute-0 systemd[1]: libpod-conmon-ad14156fc7320624a8518d8b051c2d96f43f854acf0c57fbb64bc25cce303c9f.scope: Deactivated successfully.
Jan 21 23:24:04 compute-0 sudo[79802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:24:04 compute-0 sudo[79802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:04 compute-0 sudo[79802]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:04 compute-0 ansible-async_wrapper.py[79398]: Module complete (79398)
Jan 21 23:24:04 compute-0 sudo[79862]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyiqgqvnfxdptrzbktkkprvktxivxoph ; /usr/bin/python3'
Jan 21 23:24:04 compute-0 sudo[79862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:24:04 compute-0 sudo[79841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:04 compute-0 sudo[79841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:04 compute-0 sudo[79841]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:04 compute-0 sudo[79878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf.new
Jan 21 23:24:04 compute-0 sudo[79878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:04 compute-0 sudo[79878]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:04 compute-0 python3[79875]: ansible-ansible.legacy.async_status Invoked with jid=j457999791183.79344 mode=status _async_dir=/root/.ansible_async
Jan 21 23:24:04 compute-0 sudo[79862]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:04 compute-0 sudo[79926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:04 compute-0 sudo[79926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:04 compute-0 sudo[79926]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:04 compute-0 sudo[80015]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puhqcmwpfrqxfyapkglqoygtfuimcsrr ; /usr/bin/python3'
Jan 21 23:24:04 compute-0 sudo[79978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf.new
Jan 21 23:24:04 compute-0 sudo[80015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:24:04 compute-0 sudo[79978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:04 compute-0 sudo[79978]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:04 compute-0 sudo[80025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:04 compute-0 sudo[80025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:04 compute-0 sudo[80025]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:04 compute-0 python3[80023]: ansible-ansible.legacy.async_status Invoked with jid=j457999791183.79344 mode=cleanup _async_dir=/root/.ansible_async
Jan 21 23:24:04 compute-0 sudo[80015]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:04 compute-0 sudo[80050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf.new
Jan 21 23:24:04 compute-0 sudo[80050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:04 compute-0 sudo[80050]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:04 compute-0 ceph-mon[74318]: Updating compute-0:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:24:04 compute-0 ceph-mon[74318]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 23:24:04 compute-0 sudo[80075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:04 compute-0 sudo[80075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:04 compute-0 sudo[80075]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:04 compute-0 sudo[80100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf.new /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:24:04 compute-0 sudo[80100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:04 compute-0 sudo[80100]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:04 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 23:24:04 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 23:24:05 compute-0 sudo[80125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:05 compute-0 sudo[80125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:05 compute-0 sudo[80125]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:05 compute-0 sudo[80150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 21 23:24:05 compute-0 sudo[80150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:05 compute-0 sudo[80150]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:05 compute-0 sudo[80197]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skslyqadxpeftholyskpfqvkjothttoq ; /usr/bin/python3'
Jan 21 23:24:05 compute-0 sudo[80197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:24:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:05 compute-0 sudo[80200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:05 compute-0 sudo[80200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:05 compute-0 sudo[80200]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:05 compute-0 sudo[80226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/etc/ceph
Jan 21 23:24:05 compute-0 sudo[80226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:05 compute-0 sudo[80226]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:05 compute-0 python3[80202]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 23:24:05 compute-0 sudo[80197]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:05 compute-0 sudo[80252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:05 compute-0 sudo[80252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:05 compute-0 sudo[80252]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:05 compute-0 sudo[80278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/etc/ceph/ceph.client.admin.keyring.new
Jan 21 23:24:05 compute-0 sudo[80278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:05 compute-0 sudo[80278]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:05 compute-0 sudo[80303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:05 compute-0 sudo[80303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:05 compute-0 sudo[80303]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:05 compute-0 sudo[80328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:24:05 compute-0 sudo[80328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:05 compute-0 sudo[80328]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:05 compute-0 sudo[80353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:05 compute-0 sudo[80399]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcycignsadprdyxwimfsdsryoxgnvdbh ; /usr/bin/python3'
Jan 21 23:24:05 compute-0 sudo[80353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:05 compute-0 sudo[80399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:24:05 compute-0 sudo[80353]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:05 compute-0 ceph-mon[74318]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 23:24:05 compute-0 ceph-mon[74318]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:05 compute-0 sudo[80404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/etc/ceph/ceph.client.admin.keyring.new
Jan 21 23:24:05 compute-0 sudo[80404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:05 compute-0 sudo[80404]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:05 compute-0 python3[80403]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:24:05 compute-0 podman[80452]: 2026-01-21 23:24:05.955290523 +0000 UTC m=+0.055620121 container create 8d27242befe9fc317e7f59b437119a1ebe0fd4b16c359eb76a3df5b4da183ab7 (image=quay.io/ceph/ceph:v18, name=friendly_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:24:05 compute-0 sudo[80461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:05 compute-0 sudo[80461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:05 compute-0 sudo[80461]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:06 compute-0 systemd[1]: Started libpod-conmon-8d27242befe9fc317e7f59b437119a1ebe0fd4b16c359eb76a3df5b4da183ab7.scope.
Jan 21 23:24:06 compute-0 podman[80452]: 2026-01-21 23:24:05.925429241 +0000 UTC m=+0.025758869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:24:06 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d57038755753c88029ab05698c5134f1757413ece4a906b89c49d058fcab9d1e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d57038755753c88029ab05698c5134f1757413ece4a906b89c49d058fcab9d1e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d57038755753c88029ab05698c5134f1757413ece4a906b89c49d058fcab9d1e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:06 compute-0 podman[80452]: 2026-01-21 23:24:06.056814237 +0000 UTC m=+0.157143875 container init 8d27242befe9fc317e7f59b437119a1ebe0fd4b16c359eb76a3df5b4da183ab7 (image=quay.io/ceph/ceph:v18, name=friendly_satoshi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 21 23:24:06 compute-0 podman[80452]: 2026-01-21 23:24:06.067306178 +0000 UTC m=+0.167635766 container start 8d27242befe9fc317e7f59b437119a1ebe0fd4b16c359eb76a3df5b4da183ab7 (image=quay.io/ceph/ceph:v18, name=friendly_satoshi, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 21 23:24:06 compute-0 podman[80452]: 2026-01-21 23:24:06.07133728 +0000 UTC m=+0.171666868 container attach 8d27242befe9fc317e7f59b437119a1ebe0fd4b16c359eb76a3df5b4da183ab7 (image=quay.io/ceph/ceph:v18, name=friendly_satoshi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 23:24:06 compute-0 sudo[80493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/etc/ceph/ceph.client.admin.keyring.new
Jan 21 23:24:06 compute-0 sudo[80493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:06 compute-0 sudo[80493]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:06 compute-0 sudo[80522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:06 compute-0 sudo[80522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:06 compute-0 sudo[80522]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:06 compute-0 sudo[80547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/etc/ceph/ceph.client.admin.keyring.new
Jan 21 23:24:06 compute-0 sudo[80547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:06 compute-0 sudo[80547]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:06 compute-0 sudo[80572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:06 compute-0 sudo[80572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:06 compute-0 sudo[80572]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:06 compute-0 sudo[80597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 21 23:24:06 compute-0 sudo[80597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:06 compute-0 sudo[80597]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:06 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.client.admin.keyring
Jan 21 23:24:06 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.client.admin.keyring
Jan 21 23:24:06 compute-0 sudo[80641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:06 compute-0 sudo[80641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:06 compute-0 sudo[80641]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:06 compute-0 sudo[80666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config
Jan 21 23:24:06 compute-0 sudo[80666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:06 compute-0 sudo[80666]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:06 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 23:24:06 compute-0 friendly_satoshi[80495]: 
Jan 21 23:24:06 compute-0 friendly_satoshi[80495]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 21 23:24:06 compute-0 systemd[1]: libpod-8d27242befe9fc317e7f59b437119a1ebe0fd4b16c359eb76a3df5b4da183ab7.scope: Deactivated successfully.
Jan 21 23:24:06 compute-0 podman[80452]: 2026-01-21 23:24:06.621744648 +0000 UTC m=+0.722074236 container died 8d27242befe9fc317e7f59b437119a1ebe0fd4b16c359eb76a3df5b4da183ab7 (image=quay.io/ceph/ceph:v18, name=friendly_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Jan 21 23:24:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-d57038755753c88029ab05698c5134f1757413ece4a906b89c49d058fcab9d1e-merged.mount: Deactivated successfully.
Jan 21 23:24:06 compute-0 podman[80452]: 2026-01-21 23:24:06.681816364 +0000 UTC m=+0.782145932 container remove 8d27242befe9fc317e7f59b437119a1ebe0fd4b16c359eb76a3df5b4da183ab7 (image=quay.io/ceph/ceph:v18, name=friendly_satoshi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:24:06 compute-0 sudo[80693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:06 compute-0 systemd[1]: libpod-conmon-8d27242befe9fc317e7f59b437119a1ebe0fd4b16c359eb76a3df5b4da183ab7.scope: Deactivated successfully.
Jan 21 23:24:06 compute-0 sudo[80693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:06 compute-0 sudo[80693]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:06 compute-0 sudo[80399]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:06 compute-0 sudo[80731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config
Jan 21 23:24:06 compute-0 sudo[80731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:06 compute-0 sudo[80731]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:06 compute-0 sudo[80756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:06 compute-0 sudo[80756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:06 compute-0 sudo[80756]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:06 compute-0 sudo[80781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.client.admin.keyring.new
Jan 21 23:24:06 compute-0 sudo[80781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:06 compute-0 sudo[80781]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:06 compute-0 sudo[80806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:06 compute-0 sudo[80806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:06 compute-0 sudo[80806]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:07 compute-0 sudo[80877]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbhkzvbcbtxigcwlxsuhtrhjievbcyfd ; /usr/bin/python3'
Jan 21 23:24:07 compute-0 sudo[80877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:24:07 compute-0 sudo[80834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:24:07 compute-0 sudo[80834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:07 compute-0 sudo[80834]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:07 compute-0 sudo[80882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:24:07 compute-0 sudo[80882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:07 compute-0 sudo[80882]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:07 compute-0 ceph-mon[74318]: Updating compute-0:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.client.admin.keyring
Jan 21 23:24:07 compute-0 ceph-mon[74318]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 23:24:07 compute-0 ceph-mon[74318]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:07 compute-0 sudo[80907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.client.admin.keyring.new
Jan 21 23:24:07 compute-0 sudo[80907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:07 compute-0 sudo[80907]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:07 compute-0 python3[80880]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:24:07 compute-0 podman[80932]: 2026-01-21 23:24:07.336133789 +0000 UTC m=+0.068946310 container create 6ae6b149cc235c0c26b40932c5be3aac310980bc170ef25e06f06b35bf1deea4 (image=quay.io/ceph/ceph:v18, name=crazy_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:24:07 compute-0 systemd[1]: Started libpod-conmon-6ae6b149cc235c0c26b40932c5be3aac310980bc170ef25e06f06b35bf1deea4.scope.
Jan 21 23:24:07 compute-0 podman[80932]: 2026-01-21 23:24:07.309514164 +0000 UTC m=+0.042326715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:24:07 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6183388281f5236fcdb6e8a436b194dd46bf8db5b8ee3d6b14ce620befd33c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6183388281f5236fcdb6e8a436b194dd46bf8db5b8ee3d6b14ce620befd33c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6183388281f5236fcdb6e8a436b194dd46bf8db5b8ee3d6b14ce620befd33c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:07 compute-0 sudo[80968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:07 compute-0 sudo[80968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:07 compute-0 sudo[80968]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:07 compute-0 podman[80932]: 2026-01-21 23:24:07.435989031 +0000 UTC m=+0.168801552 container init 6ae6b149cc235c0c26b40932c5be3aac310980bc170ef25e06f06b35bf1deea4 (image=quay.io/ceph/ceph:v18, name=crazy_satoshi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 23:24:07 compute-0 podman[80932]: 2026-01-21 23:24:07.446958476 +0000 UTC m=+0.179770997 container start 6ae6b149cc235c0c26b40932c5be3aac310980bc170ef25e06f06b35bf1deea4 (image=quay.io/ceph/ceph:v18, name=crazy_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:24:07 compute-0 podman[80932]: 2026-01-21 23:24:07.451260157 +0000 UTC m=+0.184072678 container attach 6ae6b149cc235c0c26b40932c5be3aac310980bc170ef25e06f06b35bf1deea4 (image=quay.io/ceph/ceph:v18, name=crazy_satoshi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:24:07 compute-0 sudo[80999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.client.admin.keyring.new
Jan 21 23:24:07 compute-0 sudo[80999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:07 compute-0 sudo[80999]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:07 compute-0 sudo[81024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:07 compute-0 sudo[81024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:07 compute-0 sudo[81024]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:07 compute-0 sudo[81049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.client.admin.keyring.new
Jan 21 23:24:07 compute-0 sudo[81049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:07 compute-0 sudo[81049]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:07 compute-0 sudo[81074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:07 compute-0 sudo[81074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:07 compute-0 sudo[81074]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:07 compute-0 sudo[81101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.client.admin.keyring.new /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.client.admin.keyring
Jan 21 23:24:07 compute-0 sudo[81101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:07 compute-0 sudo[81101]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:24:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:24:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:24:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:07 compute-0 ceph-mgr[74614]: [progress INFO root] update: starting ev 2ea1b53d-992c-475d-ac96-4a0c3160656c (Updating crash deployment (+1 -> 1))
Jan 21 23:24:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 21 23:24:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 23:24:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 21 23:24:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:24:07 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:07 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 21 23:24:07 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 21 23:24:07 compute-0 sudo[81143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:07 compute-0 sudo[81143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:07 compute-0 sudo[81143]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:07 compute-0 ansible-async_wrapper.py[79397]: Done in kid B.
Jan 21 23:24:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Jan 21 23:24:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/887478204' entity='client.admin' 
Jan 21 23:24:08 compute-0 systemd[1]: libpod-6ae6b149cc235c0c26b40932c5be3aac310980bc170ef25e06f06b35bf1deea4.scope: Deactivated successfully.
Jan 21 23:24:08 compute-0 podman[80932]: 2026-01-21 23:24:08.02471428 +0000 UTC m=+0.757526821 container died 6ae6b149cc235c0c26b40932c5be3aac310980bc170ef25e06f06b35bf1deea4 (image=quay.io/ceph/ceph:v18, name=crazy_satoshi, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:24:08 compute-0 sudo[81168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:24:08 compute-0 sudo[81168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:08 compute-0 sudo[81168]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f6183388281f5236fcdb6e8a436b194dd46bf8db5b8ee3d6b14ce620befd33c-merged.mount: Deactivated successfully.
Jan 21 23:24:08 compute-0 podman[80932]: 2026-01-21 23:24:08.073507041 +0000 UTC m=+0.806319522 container remove 6ae6b149cc235c0c26b40932c5be3aac310980bc170ef25e06f06b35bf1deea4 (image=quay.io/ceph/ceph:v18, name=crazy_satoshi, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 21 23:24:08 compute-0 systemd[1]: libpod-conmon-6ae6b149cc235c0c26b40932c5be3aac310980bc170ef25e06f06b35bf1deea4.scope: Deactivated successfully.
Jan 21 23:24:08 compute-0 sudo[80877]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:08 compute-0 sudo[81207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:08 compute-0 sudo[81207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:08 compute-0 sudo[81207]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:08 compute-0 sudo[81232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:24:08 compute-0 sudo[81232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:08 compute-0 sudo[81280]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eywqxtyibkjdubkvwmkwsuncffebpljo ; /usr/bin/python3'
Jan 21 23:24:08 compute-0 sudo[81280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:24:08 compute-0 python3[81282]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:24:08 compute-0 podman[81313]: 2026-01-21 23:24:08.543005925 +0000 UTC m=+0.062654637 container create 33ba946ed2482c35d560c882f0ea5b594b0e312f91c1c3662225cdefa1ceeb59 (image=quay.io/ceph/ceph:v18, name=elegant_volhard, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 23:24:08 compute-0 podman[81333]: 2026-01-21 23:24:08.55726928 +0000 UTC m=+0.045084838 container create baa8d346d11a0bb5416330016defac0ea7079200edf1658711ab3860252c18d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 21 23:24:08 compute-0 systemd[1]: Started libpod-conmon-33ba946ed2482c35d560c882f0ea5b594b0e312f91c1c3662225cdefa1ceeb59.scope.
Jan 21 23:24:08 compute-0 systemd[1]: Started libpod-conmon-baa8d346d11a0bb5416330016defac0ea7079200edf1658711ab3860252c18d4.scope.
Jan 21 23:24:08 compute-0 podman[81313]: 2026-01-21 23:24:08.515942567 +0000 UTC m=+0.035591339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:24:08 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:24:08 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df4ecbd7d5d37fcc9e35f04223b9042c481311a2d8bf9d23f5bb30430962f893/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df4ecbd7d5d37fcc9e35f04223b9042c481311a2d8bf9d23f5bb30430962f893/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df4ecbd7d5d37fcc9e35f04223b9042c481311a2d8bf9d23f5bb30430962f893/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:08 compute-0 podman[81333]: 2026-01-21 23:24:08.536241378 +0000 UTC m=+0.024056946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:24:08 compute-0 podman[81313]: 2026-01-21 23:24:08.639920698 +0000 UTC m=+0.159569420 container init 33ba946ed2482c35d560c882f0ea5b594b0e312f91c1c3662225cdefa1ceeb59 (image=quay.io/ceph/ceph:v18, name=elegant_volhard, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 23:24:08 compute-0 podman[81333]: 2026-01-21 23:24:08.644606451 +0000 UTC m=+0.132421999 container init baa8d346d11a0bb5416330016defac0ea7079200edf1658711ab3860252c18d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_curie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 21 23:24:08 compute-0 podman[81313]: 2026-01-21 23:24:08.64816243 +0000 UTC m=+0.167811132 container start 33ba946ed2482c35d560c882f0ea5b594b0e312f91c1c3662225cdefa1ceeb59 (image=quay.io/ceph/ceph:v18, name=elegant_volhard, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:24:08 compute-0 podman[81333]: 2026-01-21 23:24:08.650906754 +0000 UTC m=+0.138722312 container start baa8d346d11a0bb5416330016defac0ea7079200edf1658711ab3860252c18d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:24:08 compute-0 podman[81313]: 2026-01-21 23:24:08.651487192 +0000 UTC m=+0.171136014 container attach 33ba946ed2482c35d560c882f0ea5b594b0e312f91c1c3662225cdefa1ceeb59 (image=quay.io/ceph/ceph:v18, name=elegant_volhard, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 21 23:24:08 compute-0 dazzling_curie[81356]: 167 167
Jan 21 23:24:08 compute-0 systemd[1]: libpod-baa8d346d11a0bb5416330016defac0ea7079200edf1658711ab3860252c18d4.scope: Deactivated successfully.
Jan 21 23:24:08 compute-0 podman[81333]: 2026-01-21 23:24:08.65604664 +0000 UTC m=+0.143862198 container attach baa8d346d11a0bb5416330016defac0ea7079200edf1658711ab3860252c18d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 23:24:08 compute-0 podman[81333]: 2026-01-21 23:24:08.656374491 +0000 UTC m=+0.144190039 container died baa8d346d11a0bb5416330016defac0ea7079200edf1658711ab3860252c18d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_curie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 23:24:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7630b69962fc1a888315ee4bbcaaf65b2659b26ee65823b8637da67083b97575-merged.mount: Deactivated successfully.
Jan 21 23:24:08 compute-0 podman[81333]: 2026-01-21 23:24:08.694785105 +0000 UTC m=+0.182600683 container remove baa8d346d11a0bb5416330016defac0ea7079200edf1658711ab3860252c18d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_curie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 23:24:08 compute-0 systemd[1]: libpod-conmon-baa8d346d11a0bb5416330016defac0ea7079200edf1658711ab3860252c18d4.scope: Deactivated successfully.
Jan 21 23:24:08 compute-0 systemd[1]: Reloading.
Jan 21 23:24:08 compute-0 systemd-rc-local-generator[81403]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:24:08 compute-0 systemd-sysv-generator[81406]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:24:08 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:08 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:08 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:08 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 23:24:08 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 21 23:24:08 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:08 compute-0 ceph-mon[74318]: Deploying daemon crash.compute-0 on compute-0
Jan 21 23:24:08 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/887478204' entity='client.admin' 
Jan 21 23:24:09 compute-0 systemd[1]: Reloading.
Jan 21 23:24:09 compute-0 systemd-rc-local-generator[81464]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:24:09 compute-0 systemd-sysv-generator[81467]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:24:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:24:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:24:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:24:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:24:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:24:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:24:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Jan 21 23:24:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/723558452' entity='client.admin' 
Jan 21 23:24:09 compute-0 podman[81313]: 2026-01-21 23:24:09.25099762 +0000 UTC m=+0.770646362 container died 33ba946ed2482c35d560c882f0ea5b594b0e312f91c1c3662225cdefa1ceeb59 (image=quay.io/ceph/ceph:v18, name=elegant_volhard, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 23:24:09 compute-0 systemd[1]: libpod-33ba946ed2482c35d560c882f0ea5b594b0e312f91c1c3662225cdefa1ceeb59.scope: Deactivated successfully.
Jan 21 23:24:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-df4ecbd7d5d37fcc9e35f04223b9042c481311a2d8bf9d23f5bb30430962f893-merged.mount: Deactivated successfully.
Jan 21 23:24:09 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 3759241a-7f1c-520d-ba17-879943ee2f00...
Jan 21 23:24:09 compute-0 podman[81313]: 2026-01-21 23:24:09.306050453 +0000 UTC m=+0.825699155 container remove 33ba946ed2482c35d560c882f0ea5b594b0e312f91c1c3662225cdefa1ceeb59 (image=quay.io/ceph/ceph:v18, name=elegant_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:24:09 compute-0 systemd[1]: libpod-conmon-33ba946ed2482c35d560c882f0ea5b594b0e312f91c1c3662225cdefa1ceeb59.scope: Deactivated successfully.
Jan 21 23:24:09 compute-0 sudo[81280]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:09 compute-0 podman[81534]: 2026-01-21 23:24:09.524322266 +0000 UTC m=+0.070969891 container create fccf1150c9b902bff82754a1e49332334e4b568f76298cf35adba0c878a592c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 23:24:09 compute-0 sudo[81570]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpmprovznybtsirhfuyxxzqxpquaumja ; /usr/bin/python3'
Jan 21 23:24:09 compute-0 sudo[81570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b03ecf96ebd9c88bb76b8322ceaff1275790740421d72d8bb4d20c7d096e3138/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b03ecf96ebd9c88bb76b8322ceaff1275790740421d72d8bb4d20c7d096e3138/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b03ecf96ebd9c88bb76b8322ceaff1275790740421d72d8bb4d20c7d096e3138/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b03ecf96ebd9c88bb76b8322ceaff1275790740421d72d8bb4d20c7d096e3138/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:09 compute-0 podman[81534]: 2026-01-21 23:24:09.494214205 +0000 UTC m=+0.040861880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:24:09 compute-0 podman[81534]: 2026-01-21 23:24:09.592084627 +0000 UTC m=+0.138732302 container init fccf1150c9b902bff82754a1e49332334e4b568f76298cf35adba0c878a592c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-crash-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:24:09 compute-0 podman[81534]: 2026-01-21 23:24:09.607451207 +0000 UTC m=+0.154098832 container start fccf1150c9b902bff82754a1e49332334e4b568f76298cf35adba0c878a592c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-crash-compute-0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 21 23:24:09 compute-0 bash[81534]: fccf1150c9b902bff82754a1e49332334e4b568f76298cf35adba0c878a592c9
Jan 21 23:24:09 compute-0 systemd[1]: Started Ceph crash.compute-0 for 3759241a-7f1c-520d-ba17-879943ee2f00.
Jan 21 23:24:09 compute-0 sudo[81232]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:24:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:24:09 compute-0 python3[81572]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:24:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 21 23:24:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:09 compute-0 ceph-mgr[74614]: [progress INFO root] complete: finished ev 2ea1b53d-992c-475d-ac96-4a0c3160656c (Updating crash deployment (+1 -> 1))
Jan 21 23:24:09 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event 2ea1b53d-992c-475d-ac96-4a0c3160656c (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 21 23:24:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 21 23:24:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:09 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev f4539daa-9628-491c-bd12-60377df410aa does not exist
Jan 21 23:24:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 21 23:24:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:09 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 269d0abf-5f8f-431d-8d9d-802ce5cfc48c does not exist
Jan 21 23:24:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 21 23:24:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:09 compute-0 podman[81580]: 2026-01-21 23:24:09.813049673 +0000 UTC m=+0.078854012 container create c6acc4ac97e57f8a28799cfa8972b55182f3a2c82a78afc813c08ca7b28c1e90 (image=quay.io/ceph/ceph:v18, name=cranky_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:24:09 compute-0 sudo[81586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:09 compute-0 sudo[81586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:09 compute-0 sudo[81586]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:09 compute-0 systemd[1]: Started libpod-conmon-c6acc4ac97e57f8a28799cfa8972b55182f3a2c82a78afc813c08ca7b28c1e90.scope.
Jan 21 23:24:09 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-crash-compute-0[81575]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 21 23:24:09 compute-0 ceph-mon[74318]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:09 compute-0 podman[81580]: 2026-01-21 23:24:09.783117548 +0000 UTC m=+0.048921937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:24:09 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/723558452' entity='client.admin' 
Jan 21 23:24:09 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:09 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:09 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:09 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:09 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:09 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:09 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6799e5bc3ff4d8a7669c94da3bcf4db4ce43a9aaa19f23cd4dad2a9c1bb1116b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6799e5bc3ff4d8a7669c94da3bcf4db4ce43a9aaa19f23cd4dad2a9c1bb1116b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6799e5bc3ff4d8a7669c94da3bcf4db4ce43a9aaa19f23cd4dad2a9c1bb1116b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:09 compute-0 podman[81580]: 2026-01-21 23:24:09.910945475 +0000 UTC m=+0.176749824 container init c6acc4ac97e57f8a28799cfa8972b55182f3a2c82a78afc813c08ca7b28c1e90 (image=quay.io/ceph/ceph:v18, name=cranky_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Jan 21 23:24:09 compute-0 podman[81580]: 2026-01-21 23:24:09.924649014 +0000 UTC m=+0.190453353 container start c6acc4ac97e57f8a28799cfa8972b55182f3a2c82a78afc813c08ca7b28c1e90 (image=quay.io/ceph/ceph:v18, name=cranky_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 21 23:24:09 compute-0 podman[81580]: 2026-01-21 23:24:09.928265185 +0000 UTC m=+0.194069534 container attach c6acc4ac97e57f8a28799cfa8972b55182f3a2c82a78afc813c08ca7b28c1e90 (image=quay.io/ceph/ceph:v18, name=cranky_edison, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 21 23:24:09 compute-0 sudo[81619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:24:09 compute-0 sudo[81619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:09 compute-0 sudo[81619]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:10 compute-0 sudo[81650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:10 compute-0 sudo[81650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:10 compute-0 sudo[81650]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:10 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-crash-compute-0[81575]: 2026-01-21T23:24:10.075+0000 7fcdfe4da640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 21 23:24:10 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-crash-compute-0[81575]: 2026-01-21T23:24:10.075+0000 7fcdfe4da640 -1 AuthRegistry(0x7fcdf8066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 21 23:24:10 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-crash-compute-0[81575]: 2026-01-21T23:24:10.076+0000 7fcdfe4da640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 21 23:24:10 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-crash-compute-0[81575]: 2026-01-21T23:24:10.076+0000 7fcdfe4da640 -1 AuthRegistry(0x7fcdfe4d9000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 21 23:24:10 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-crash-compute-0[81575]: 2026-01-21T23:24:10.079+0000 7fcdfd4d8640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 21 23:24:10 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-crash-compute-0[81575]: 2026-01-21T23:24:10.079+0000 7fcdfe4da640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 21 23:24:10 compute-0 sudo[81675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:24:10 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-crash-compute-0[81575]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 21 23:24:10 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-crash-compute-0[81575]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 21 23:24:10 compute-0 sudo[81675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:10 compute-0 sudo[81675]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:10 compute-0 sudo[81710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:10 compute-0 sudo[81710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:10 compute-0 sudo[81710]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:10 compute-0 sudo[81735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 21 23:24:10 compute-0 sudo[81735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Jan 21 23:24:10 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1558132561' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 21 23:24:10 compute-0 podman[81849]: 2026-01-21 23:24:10.840356849 +0000 UTC m=+0.084669129 container exec 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 21 23:24:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 21 23:24:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 23:24:10 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1558132561' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 21 23:24:10 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1558132561' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 21 23:24:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 21 23:24:10 compute-0 cranky_edison[81623]: set require_min_compat_client to mimic
Jan 21 23:24:10 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 21 23:24:10 compute-0 systemd[1]: libpod-c6acc4ac97e57f8a28799cfa8972b55182f3a2c82a78afc813c08ca7b28c1e90.scope: Deactivated successfully.
Jan 21 23:24:10 compute-0 podman[81580]: 2026-01-21 23:24:10.929142203 +0000 UTC m=+1.194946532 container died c6acc4ac97e57f8a28799cfa8972b55182f3a2c82a78afc813c08ca7b28c1e90 (image=quay.io/ceph/ceph:v18, name=cranky_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 21 23:24:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6799e5bc3ff4d8a7669c94da3bcf4db4ce43a9aaa19f23cd4dad2a9c1bb1116b-merged.mount: Deactivated successfully.
Jan 21 23:24:10 compute-0 podman[81580]: 2026-01-21 23:24:10.981401132 +0000 UTC m=+1.247205441 container remove c6acc4ac97e57f8a28799cfa8972b55182f3a2c82a78afc813c08ca7b28c1e90 (image=quay.io/ceph/ceph:v18, name=cranky_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 21 23:24:10 compute-0 podman[81849]: 2026-01-21 23:24:10.988802128 +0000 UTC m=+0.233114368 container exec_died 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:24:10 compute-0 systemd[1]: libpod-conmon-c6acc4ac97e57f8a28799cfa8972b55182f3a2c82a78afc813c08ca7b28c1e90.scope: Deactivated successfully.
Jan 21 23:24:11 compute-0 sudo[81570]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:11 compute-0 sudo[81735]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:24:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:24:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:24:11 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:24:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:24:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:24:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:11 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 31faad8c-23d8-4b76-8e0a-4700989eb0ec does not exist
Jan 21 23:24:11 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 989930b6-800e-4741-b4d8-b61d5bbee4fd does not exist
Jan 21 23:24:11 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 571b7e57-b117-44ac-89d6-b7614671ba7e does not exist
Jan 21 23:24:11 compute-0 sudo[81930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:11 compute-0 sudo[81930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:11 compute-0 sudo[81930]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:11 compute-0 sudo[81955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:24:11 compute-0 sudo[81955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:11 compute-0 sudo[81955]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Jan 21 23:24:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Jan 21 23:24:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Jan 21 23:24:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Jan 21 23:24:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:11 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 21 23:24:11 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 21 23:24:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 21 23:24:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 23:24:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 21 23:24:11 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 21 23:24:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:24:11 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:11 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 21 23:24:11 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 21 23:24:11 compute-0 sudo[82004]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhgporauggjrwldnicysfvrolwpeulbx ; /usr/bin/python3'
Jan 21 23:24:11 compute-0 sudo[82004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:24:11 compute-0 sudo[82003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:11 compute-0 sudo[82003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:11 compute-0 sudo[82003]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:11 compute-0 sudo[82031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:24:11 compute-0 sudo[82031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:11 compute-0 sudo[82031]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:11 compute-0 python3[82021]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:24:11 compute-0 podman[82059]: 2026-01-21 23:24:11.734506075 +0000 UTC m=+0.050280339 container create c05eb8fcfc8ea7fbe9bdba39f6ade610e8889c3f33b5bf2e4e1434982eeebc06 (image=quay.io/ceph/ceph:v18, name=mystifying_goldstine, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 21 23:24:11 compute-0 sudo[82056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:11 compute-0 sudo[82056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:11 compute-0 sudo[82056]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:11 compute-0 systemd[1]: Started libpod-conmon-c05eb8fcfc8ea7fbe9bdba39f6ade610e8889c3f33b5bf2e4e1434982eeebc06.scope.
Jan 21 23:24:11 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9c8400f6665a6d2a91f74ed5b87dd9016dc743b582cc941a82feaf9165e790/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9c8400f6665a6d2a91f74ed5b87dd9016dc743b582cc941a82feaf9165e790/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9c8400f6665a6d2a91f74ed5b87dd9016dc743b582cc941a82feaf9165e790/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:11 compute-0 podman[82059]: 2026-01-21 23:24:11.710811351 +0000 UTC m=+0.026585705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:24:11 compute-0 podman[82059]: 2026-01-21 23:24:11.816409279 +0000 UTC m=+0.132183593 container init c05eb8fcfc8ea7fbe9bdba39f6ade610e8889c3f33b5bf2e4e1434982eeebc06 (image=quay.io/ceph/ceph:v18, name=mystifying_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 21 23:24:11 compute-0 sudo[82097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:24:11 compute-0 sudo[82097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:11 compute-0 podman[82059]: 2026-01-21 23:24:11.822778314 +0000 UTC m=+0.138552578 container start c05eb8fcfc8ea7fbe9bdba39f6ade610e8889c3f33b5bf2e4e1434982eeebc06 (image=quay.io/ceph/ceph:v18, name=mystifying_goldstine, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:24:11 compute-0 podman[82059]: 2026-01-21 23:24:11.826644562 +0000 UTC m=+0.142418996 container attach c05eb8fcfc8ea7fbe9bdba39f6ade610e8889c3f33b5bf2e4e1434982eeebc06 (image=quay.io/ceph/ceph:v18, name=mystifying_goldstine, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:24:11 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1558132561' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 21 23:24:11 compute-0 ceph-mon[74318]: osdmap e3: 0 total, 0 up, 0 in
Jan 21 23:24:11 compute-0 ceph-mon[74318]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:24:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 23:24:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 21 23:24:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:12 compute-0 podman[82143]: 2026-01-21 23:24:12.057316684 +0000 UTC m=+0.055147367 container create 7f508bf50b40fa4045dbb99c019b46d750f927a56b44547fb7b0d2315f713f1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_fermi, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 23:24:12 compute-0 systemd[1]: Started libpod-conmon-7f508bf50b40fa4045dbb99c019b46d750f927a56b44547fb7b0d2315f713f1a.scope.
Jan 21 23:24:12 compute-0 podman[82143]: 2026-01-21 23:24:12.02774663 +0000 UTC m=+0.025577363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:24:12 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:24:12 compute-0 podman[82143]: 2026-01-21 23:24:12.137048212 +0000 UTC m=+0.134878895 container init 7f508bf50b40fa4045dbb99c019b46d750f927a56b44547fb7b0d2315f713f1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_fermi, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 23:24:12 compute-0 podman[82143]: 2026-01-21 23:24:12.147023196 +0000 UTC m=+0.144853859 container start 7f508bf50b40fa4045dbb99c019b46d750f927a56b44547fb7b0d2315f713f1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_fermi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 23:24:12 compute-0 podman[82143]: 2026-01-21 23:24:12.15009904 +0000 UTC m=+0.147929703 container attach 7f508bf50b40fa4045dbb99c019b46d750f927a56b44547fb7b0d2315f713f1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:24:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:24:12 compute-0 jolly_fermi[82160]: 167 167
Jan 21 23:24:12 compute-0 systemd[1]: libpod-7f508bf50b40fa4045dbb99c019b46d750f927a56b44547fb7b0d2315f713f1a.scope: Deactivated successfully.
Jan 21 23:24:12 compute-0 conmon[82160]: conmon 7f508bf50b40fa4045db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f508bf50b40fa4045dbb99c019b46d750f927a56b44547fb7b0d2315f713f1a.scope/container/memory.events
Jan 21 23:24:12 compute-0 podman[82175]: 2026-01-21 23:24:12.201909215 +0000 UTC m=+0.030943577 container died 7f508bf50b40fa4045dbb99c019b46d750f927a56b44547fb7b0d2315f713f1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_fermi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:24:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d5ae79ae0528403d452ebdd06be9f5138f6ddbc09ab5ab6490500f643a7f0e0-merged.mount: Deactivated successfully.
Jan 21 23:24:12 compute-0 podman[82175]: 2026-01-21 23:24:12.251073407 +0000 UTC m=+0.080107729 container remove 7f508bf50b40fa4045dbb99c019b46d750f927a56b44547fb7b0d2315f713f1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_fermi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 21 23:24:12 compute-0 systemd[1]: libpod-conmon-7f508bf50b40fa4045dbb99c019b46d750f927a56b44547fb7b0d2315f713f1a.scope: Deactivated successfully.
Jan 21 23:24:12 compute-0 sudo[82097]: pam_unix(sudo:session): session closed for user root
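
The short-lived jolly_fermi container above exists only to print "167 167": this matches cephadm's uid/gid probe, which runs stat inside the Ceph image to learn which uid/gid (the ceph user, 167:167 in these packages) should own daemon files on the host. A minimal sketch of the same probe, assuming the image tag seen in this log; this is a reproduction for illustration, not cephadm's own code:

    #!/usr/bin/env python3
    # Hypothetical reproduction of the uid/gid probe (not cephadm's code).
    import subprocess

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         "quay.io/ceph/ceph:v18",          # image tag taken from this log
         "-c", "%u %g", "/var/lib/ceph"],  # owner of the ceph dir inside the image
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected here: 167 167
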
Jan 21 23:24:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:24:12 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:24:12 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:12 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.boqcsl (unknown last config time)...
Jan 21 23:24:12 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.boqcsl (unknown last config time)...
Jan 21 23:24:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.boqcsl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 21 23:24:12 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.boqcsl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 23:24:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 21 23:24:12 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 21 23:24:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:24:12 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:12 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.boqcsl on compute-0
Jan 21 23:24:12 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.boqcsl on compute-0
Jan 21 23:24:12 compute-0 sudo[82201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:12 compute-0 sudo[82201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:12 compute-0 sudo[82201]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:12 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:24:12 compute-0 sudo[82227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:12 compute-0 sudo[82227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:12 compute-0 sudo[82227]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:12 compute-0 sudo[82228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:24:12 compute-0 sudo[82228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:12 compute-0 sudo[82228]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:12 compute-0 sudo[82277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:24:12 compute-0 sudo[82277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:12 compute-0 sudo[82277]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:12 compute-0 sudo[82281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:12 compute-0 sudo[82281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:12 compute-0 sudo[82281]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:12 compute-0 sudo[82327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:12 compute-0 sudo[82327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:12 compute-0 sudo[82327]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:12 compute-0 sudo[82332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:24:12 compute-0 sudo[82332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:12 compute-0 sudo[82376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Jan 21 23:24:12 compute-0 sudo[82376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:12 compute-0 ceph-mon[74318]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 21 23:24:12 compute-0 ceph-mon[74318]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 21 23:24:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.boqcsl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 23:24:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 21 23:24:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:13 compute-0 podman[82425]: 2026-01-21 23:24:13.026208005 +0000 UTC m=+0.064980588 container create 0940b59fe0d660da6b9d33436aecb741f09a04bae6780e42d58ecf2a2c5f268c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 23:24:13 compute-0 systemd[1]: Started libpod-conmon-0940b59fe0d660da6b9d33436aecb741f09a04bae6780e42d58ecf2a2c5f268c.scope.
Jan 21 23:24:13 compute-0 podman[82425]: 2026-01-21 23:24:12.998387394 +0000 UTC m=+0.037160017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:24:13 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:24:13 compute-0 sudo[82376]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 21 23:24:13 compute-0 podman[82425]: 2026-01-21 23:24:13.124433268 +0000 UTC m=+0.163205851 container init 0940b59fe0d660da6b9d33436aecb741f09a04bae6780e42d58ecf2a2c5f268c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:24:13 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 21 23:24:13 compute-0 podman[82425]: 2026-01-21 23:24:13.135074553 +0000 UTC m=+0.173847126 container start 0940b59fe0d660da6b9d33436aecb741f09a04bae6780e42d58ecf2a2c5f268c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:24:13 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 21 23:24:13 compute-0 vigorous_rosalind[82454]: 167 167
Jan 21 23:24:13 compute-0 systemd[1]: libpod-0940b59fe0d660da6b9d33436aecb741f09a04bae6780e42d58ecf2a2c5f268c.scope: Deactivated successfully.
Jan 21 23:24:13 compute-0 podman[82425]: 2026-01-21 23:24:13.142200381 +0000 UTC m=+0.180972964 container attach 0940b59fe0d660da6b9d33436aecb741f09a04bae6780e42d58ecf2a2c5f268c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rosalind, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:24:13 compute-0 podman[82425]: 2026-01-21 23:24:13.143063677 +0000 UTC m=+0.181836260 container died 0940b59fe0d660da6b9d33436aecb741f09a04bae6780e42d58ecf2a2c5f268c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rosalind, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 23:24:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:13 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 21 23:24:13 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:13 compute-0 ceph-mgr[74614]: [cephadm INFO root] Added host compute-0
Jan 21 23:24:13 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 21 23:24:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5892f2a13b83f3ae02209a720e362a34b7970044bd2885235149bd696f992163-merged.mount: Deactivated successfully.
Jan 21 23:24:13 compute-0 podman[82425]: 2026-01-21 23:24:13.192327464 +0000 UTC m=+0.231100017 container remove 0940b59fe0d660da6b9d33436aecb741f09a04bae6780e42d58ecf2a2c5f268c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rosalind, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 23:24:13 compute-0 systemd[1]: libpod-conmon-0940b59fe0d660da6b9d33436aecb741f09a04bae6780e42d58ecf2a2c5f268c.scope: Deactivated successfully.
Jan 21 23:24:13 compute-0 sudo[82332]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:24:13 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:24:13 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:24:13 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:24:13 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:24:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:24:13 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:13 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 81289467-e5f6-456f-82d7-8a7a42113b6b does not exist
Jan 21 23:24:13 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 4a7a19c4-44cf-4f5d-97be-b77bda835445 does not exist
Jan 21 23:24:13 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 9404a444-b69f-4bc2-8761-5206f8b924d9 does not exist
Jan 21 23:24:13 compute-0 sudo[82472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:13 compute-0 sudo[82472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:13 compute-0 sudo[82472]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:13 compute-0 sudo[82497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:24:13 compute-0 sudo[82497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:13 compute-0 sudo[82497]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:14 compute-0 ceph-mon[74318]: Reconfiguring mgr.compute-0.boqcsl (unknown last config time)...
Jan 21 23:24:14 compute-0 ceph-mon[74318]: Reconfiguring daemon mgr.compute-0.boqcsl on compute-0
Jan 21 23:24:14 compute-0 ceph-mon[74318]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:24:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:14 compute-0 ceph-mon[74318]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:24:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:14 compute-0 ceph-mgr[74614]: [progress INFO root] Writing back 1 completed events
Jan 21 23:24:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 21 23:24:14 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:14 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Jan 21 23:24:14 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Jan 21 23:24:15 compute-0 ceph-mon[74318]: Added host compute-0
Jan 21 23:24:15 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:16 compute-0 ceph-mon[74318]: Deploying cephadm binary to compute-1
Jan 21 23:24:16 compute-0 ceph-mon[74318]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:24:17 compute-0 ceph-mon[74318]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 21 23:24:18 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:18 compute-0 ceph-mgr[74614]: [cephadm INFO root] Added host compute-1
Jan 21 23:24:18 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Added host compute-1
Jan 21 23:24:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:24:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:19 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:19 compute-0 ceph-mon[74318]: Added host compute-1
Jan 21 23:24:19 compute-0 ceph-mon[74318]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:19 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:24:20 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:20 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Jan 21 23:24:20 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Jan 21 23:24:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:21 compute-0 ceph-mon[74318]: Deploying cephadm binary to compute-2
Jan 21 23:24:21 compute-0 ceph-mon[74318]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:21 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:24:21 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:24:22 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:23 compute-0 ceph-mon[74318]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 21 23:24:24 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:24 compute-0 ceph-mgr[74614]: [cephadm INFO root] Added host compute-2
Jan 21 23:24:24 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Added host compute-2
Jan 21 23:24:24 compute-0 ceph-mgr[74614]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 21 23:24:24 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 21 23:24:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 21 23:24:24 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:24 compute-0 ceph-mgr[74614]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 21 23:24:24 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 21 23:24:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 21 23:24:24 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:24 compute-0 ceph-mgr[74614]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 21 23:24:24 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 21 23:24:24 compute-0 ceph-mgr[74614]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Jan 21 23:24:24 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Jan 21 23:24:24 compute-0 ceph-mgr[74614]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 21 23:24:24 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 21 23:24:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Jan 21 23:24:24 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:24 compute-0 mystifying_goldstine[82105]: Added host 'compute-0' with addr '192.168.122.100'
Jan 21 23:24:24 compute-0 mystifying_goldstine[82105]: Added host 'compute-1' with addr '192.168.122.101'
Jan 21 23:24:24 compute-0 mystifying_goldstine[82105]: Added host 'compute-2' with addr '192.168.122.102'
Jan 21 23:24:24 compute-0 mystifying_goldstine[82105]: Scheduled mon update...
Jan 21 23:24:24 compute-0 mystifying_goldstine[82105]: Scheduled mgr update...
Jan 21 23:24:24 compute-0 mystifying_goldstine[82105]: Scheduled osd.default_drive_group update...
Jan 21 23:24:24 compute-0 systemd[1]: libpod-c05eb8fcfc8ea7fbe9bdba39f6ade610e8889c3f33b5bf2e4e1434982eeebc06.scope: Deactivated successfully.
Jan 21 23:24:24 compute-0 podman[82059]: 2026-01-21 23:24:24.819146063 +0000 UTC m=+13.134920367 container died c05eb8fcfc8ea7fbe9bdba39f6ade610e8889c3f33b5bf2e4e1434982eeebc06 (image=quay.io/ceph/ceph:v18, name=mystifying_goldstine, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 23:24:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b9c8400f6665a6d2a91f74ed5b87dd9016dc743b582cc941a82feaf9165e790-merged.mount: Deactivated successfully.
Jan 21 23:24:24 compute-0 podman[82059]: 2026-01-21 23:24:24.878776033 +0000 UTC m=+13.194550307 container remove c05eb8fcfc8ea7fbe9bdba39f6ade610e8889c3f33b5bf2e4e1434982eeebc06 (image=quay.io/ceph/ceph:v18, name=mystifying_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:24:24 compute-0 systemd[1]: libpod-conmon-c05eb8fcfc8ea7fbe9bdba39f6ade610e8889c3f33b5bf2e4e1434982eeebc06.scope: Deactivated successfully.
Jan 21 23:24:24 compute-0 sudo[82004]: pam_unix(sudo:session): session closed for user root
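
The mystifying_goldstine container that just exited was evidently an orchestrator apply of a multi-document spec: the "Added host ... with addr ..." and "Scheduled ... update..." lines above are its CLI output, and the next Ansible task mounts /home/ceph-admin/specs/ceph_spec.yaml. A plausible reconstruction of that spec, inferred only from the hostnames, addresses, and service placements logged above; the real file may differ, and the OSD device selection in particular is a guess:

    # Hypothetical reconstruction of /home/ceph-admin/specs/ceph_spec.yaml,
    # inferred from the "Added host"/"Scheduled ... update" messages above.
    import yaml

    CEPH_SPEC = """
    service_type: host
    hostname: compute-0
    addr: 192.168.122.100
    ---
    service_type: host
    hostname: compute-1
    addr: 192.168.122.101
    ---
    service_type: host
    hostname: compute-2
    addr: 192.168.122.102
    ---
    service_type: mon
    placement:
      hosts: [compute-0, compute-1, compute-2]
    ---
    service_type: mgr
    placement:
      hosts: [compute-0, compute-1, compute-2]
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts: [compute-0, compute-1, compute-2]
    data_devices:
      all: true        # guess: device selection is not shown in the log
    """

    for spec in yaml.safe_load_all(CEPH_SPEC):
        print(spec["service_type"], spec.get("service_id", ""))
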
Jan 21 23:24:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:25 compute-0 sudo[82558]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bieqisyiydutvdveirkvaotzvdyyblvc ; /usr/bin/python3'
Jan 21 23:24:25 compute-0 sudo[82558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:24:25 compute-0 python3[82560]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:24:25 compute-0 podman[82562]: 2026-01-21 23:24:25.412731282 +0000 UTC m=+0.055108530 container create d3f275818b620947451d365dbf54e5a883e035a52aefcd43a4d1bf3fdd3f07ec (image=quay.io/ceph/ceph:v18, name=sleepy_kare, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 21 23:24:25 compute-0 systemd[1]: Started libpod-conmon-d3f275818b620947451d365dbf54e5a883e035a52aefcd43a4d1bf3fdd3f07ec.scope.
Jan 21 23:24:25 compute-0 podman[82562]: 2026-01-21 23:24:25.382991044 +0000 UTC m=+0.025368342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:24:25 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:24:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a15f906afaa1b7d984e91271a04c2175dba8898a1447ac8cfb42ceeeacfa4d1a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a15f906afaa1b7d984e91271a04c2175dba8898a1447ac8cfb42ceeeacfa4d1a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a15f906afaa1b7d984e91271a04c2175dba8898a1447ac8cfb42ceeeacfa4d1a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:25 compute-0 podman[82562]: 2026-01-21 23:24:25.508496331 +0000 UTC m=+0.150873619 container init d3f275818b620947451d365dbf54e5a883e035a52aefcd43a4d1bf3fdd3f07ec (image=quay.io/ceph/ceph:v18, name=sleepy_kare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:24:25 compute-0 podman[82562]: 2026-01-21 23:24:25.516654189 +0000 UTC m=+0.159031427 container start d3f275818b620947451d365dbf54e5a883e035a52aefcd43a4d1bf3fdd3f07ec (image=quay.io/ceph/ceph:v18, name=sleepy_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 23:24:25 compute-0 podman[82562]: 2026-01-21 23:24:25.520911733 +0000 UTC m=+0.163288981 container attach d3f275818b620947451d365dbf54e5a883e035a52aefcd43a4d1bf3fdd3f07ec (image=quay.io/ceph/ceph:v18, name=sleepy_kare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:24:25 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:25 compute-0 ceph-mon[74318]: Added host compute-2
Jan 21 23:24:25 compute-0 ceph-mon[74318]: Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 21 23:24:25 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:25 compute-0 ceph-mon[74318]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 21 23:24:25 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:25 compute-0 ceph-mon[74318]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 21 23:24:25 compute-0 ceph-mon[74318]: Marking host: compute-1 for OSDSpec preview refresh.
Jan 21 23:24:25 compute-0 ceph-mon[74318]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 21 23:24:25 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:25 compute-0 ceph-mon[74318]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:26 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 21 23:24:26 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/878348035' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 21 23:24:26 compute-0 sleepy_kare[82579]: 
Jan 21 23:24:26 compute-0 sleepy_kare[82579]: {"fsid":"3759241a-7f1c-520d-ba17-879943ee2f00","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":93,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-21T23:22:49.246100+0000","services":{}},"progress_events":{}}
Jan 21 23:24:26 compute-0 systemd[1]: libpod-d3f275818b620947451d365dbf54e5a883e035a52aefcd43a4d1bf3fdd3f07ec.scope: Deactivated successfully.
Jan 21 23:24:26 compute-0 podman[82562]: 2026-01-21 23:24:26.135938208 +0000 UTC m=+0.778315416 container died d3f275818b620947451d365dbf54e5a883e035a52aefcd43a4d1bf3fdd3f07ec (image=quay.io/ceph/ceph:v18, name=sleepy_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:24:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a15f906afaa1b7d984e91271a04c2175dba8898a1447ac8cfb42ceeeacfa4d1a-merged.mount: Deactivated successfully.
Jan 21 23:24:26 compute-0 podman[82562]: 2026-01-21 23:24:26.179618416 +0000 UTC m=+0.821995624 container remove d3f275818b620947451d365dbf54e5a883e035a52aefcd43a4d1bf3fdd3f07ec (image=quay.io/ceph/ceph:v18, name=sleepy_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 21 23:24:26 compute-0 systemd[1]: libpod-conmon-d3f275818b620947451d365dbf54e5a883e035a52aefcd43a4d1bf3fdd3f07ec.scope: Deactivated successfully.
Jan 21 23:24:26 compute-0 sudo[82558]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/878348035' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
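
The sleepy_kare run above is the Ansible task logged a few lines earlier: it pipes "ceph status --format json" through jq .osdmap.num_up_osds to poll for OSDs coming up (still 0 here, hence the TOO_FEW_OSDS warning embedded in the JSON). The same check without jq, as a sketch; the command-line options are copied from the logged podman invocation:

    # Sketch of the poll the playbook performs with jq .osdmap.num_up_osds.
    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "--fsid", "3759241a-7f1c-520d-ba17-879943ee2f00",
         "-c", "/etc/ceph/ceph.conf",
         "-k", "/etc/ceph/ceph.client.admin.keyring",
         "status", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)

    print(status["osdmap"]["num_up_osds"])  # 0 at this point in the log
    print(status["health"]["status"])       # HEALTH_WARN (TOO_FEW_OSDS)
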
Jan 21 23:24:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:24:27 compute-0 ceph-mon[74318]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:29 compute-0 ceph-mon[74318]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:31 compute-0 ceph-mon[74318]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:24:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:33 compute-0 ceph-mon[74318]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:35 compute-0 ceph-mon[74318]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:24:37 compute-0 ceph-mon[74318]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:24:39
Jan 21 23:24:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:24:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:24:39 compute-0 ceph-mgr[74614]: [balancer INFO root] No pools available
Jan 21 23:24:39 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:24:39 compute-0 ceph-mon[74318]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:24:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:24:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:24:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:24:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:24:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:24:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:24:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:24:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:41 compute-0 ceph-mon[74318]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:24:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:43 compute-0 ceph-mon[74318]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:24:44 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:24:44 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:24:44 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:24:44 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 21 23:24:44 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 21 23:24:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:24:44 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:24:44 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:24:44 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 21 23:24:44 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 21 23:24:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 21 23:24:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:24:45 compute-0 ceph-mon[74318]: Updating compute-1:/etc/ceph/ceph.conf
Jan 21 23:24:45 compute-0 ceph-mon[74318]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:45 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:24:45 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:24:46 compute-0 ceph-mon[74318]: Updating compute-1:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:24:47 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 21 23:24:47 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 21 23:24:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:24:47 compute-0 ceph-mon[74318]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 21 23:24:47 compute-0 ceph-mon[74318]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:48 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.client.admin.keyring
Jan 21 23:24:48 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.client.admin.keyring
Jan 21 23:24:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:49 compute-0 ceph-mon[74318]: Updating compute-1:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.client.admin.keyring
Jan 21 23:24:49 compute-0 ceph-mon[74318]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:24:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:24:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:24:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:49 compute-0 ceph-mgr[74614]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 21 23:24:49 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 21 23:24:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:49 compute-0 ceph-mgr[74614]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 21 23:24:49 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
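
Both apply failures have the same root cause: the mon and mgr placement specs list compute-2, but that host has not yet been added to cephadm's inventory, so placement stops with "Unknown hosts". A sketch of the usual diagnosis and fix (the address below is hypothetical, not taken from this log):

  # list the hosts cephadm currently manages; compute-2 should be absent
  $ ceph orch host ls

  # add the missing host; the queued specs re-apply on the next serve loop
  $ ceph orch host add compute-2 192.168.122.102
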
Jan 21 23:24:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:49 compute-0 ceph-mgr[74614]: [progress INFO root] update: starting ev 2b97670e-e031-4a3d-8973-ec542baf96d3 (Updating crash deployment (+1 -> 2))
Jan 21 23:24:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 21 23:24:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 23:24:49 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:24:49.584+0000 7fbf45a77640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Jan 21 23:24:49 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: service_name: mon
Jan 21 23:24:49 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: placement:
Jan 21 23:24:49 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]:   hosts:
Jan 21 23:24:49 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]:   - compute-0
Jan 21 23:24:49 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]:   - compute-1
Jan 21 23:24:49 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]:   - compute-2
Jan 21 23:24:49 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 21 23:24:49 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:24:49.585+0000 7fbf45a77640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Jan 21 23:24:49 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: service_name: mgr
Jan 21 23:24:49 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: placement:
Jan 21 23:24:49 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]:   hosts:
Jan 21 23:24:49 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]:   - compute-0
Jan 21 23:24:49 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]:   - compute-1
Jan 21 23:24:49 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]:   - compute-2
Jan 21 23:24:49 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 21 23:24:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 21 23:24:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:24:49 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:49 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Jan 21 23:24:49 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
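
Before deploying crash.compute-1, the mgr created the daemon's keyring with the crash profile caps seen in the audit entry above. The manual equivalent:

  # create (or fetch, if it already exists) the crash agent's credentials
  $ ceph auth get-or-create client.crash.compute-1 \
        mon 'profile crash' mgr 'profile crash'
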
Jan 21 23:24:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:50 compute-0 ceph-mon[74318]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 21 23:24:50 compute-0 ceph-mon[74318]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:50 compute-0 ceph-mon[74318]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 21 23:24:50 compute-0 ceph-mon[74318]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 23:24:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 21 23:24:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:50 compute-0 ceph-mon[74318]: Deploying daemon crash.compute-1 on compute-1
Jan 21 23:24:50 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 21 23:24:51 compute-0 ceph-mon[74318]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
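
CEPHADM_APPLY_SPEC_FAIL is raised from the two spec failures above and stays asserted until both specs apply cleanly. To see the per-service reasons, and, assuming compute-2 has since been added to the inventory, to resubmit the mon placement without waiting for the next serve loop:

  # list failing health checks with their detail lines
  $ ceph health detail

  # optional: re-apply the same placement by hand
  $ ceph orch apply mon --placement="compute-0,compute-1,compute-2"
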
Jan 21 23:24:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:24:52 compute-0 ceph-mon[74318]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:24:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:24:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 21 23:24:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:52 compute-0 ceph-mgr[74614]: [progress INFO root] complete: finished ev 2b97670e-e031-4a3d-8973-ec542baf96d3 (Updating crash deployment (+1 -> 2))
Jan 21 23:24:52 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event 2b97670e-e031-4a3d-8973-ec542baf96d3 (Updating crash deployment (+1 -> 2)) in 3 seconds
Jan 21 23:24:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 21 23:24:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:24:52 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:24:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:24:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:24:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:24:52 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:24:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:24:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:24:52 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:52 compute-0 sudo[82615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:52 compute-0 sudo[82615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:52 compute-0 sudo[82615]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:52 compute-0 sudo[82640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:24:52 compute-0 sudo[82640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:52 compute-0 sudo[82640]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:52 compute-0 sudo[82665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:52 compute-0 sudo[82665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:52 compute-0 sudo[82665]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:53 compute-0 sudo[82690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:24:53 compute-0 sudo[82690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
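
This sudo invocation is the per-host cephadm binary wrapping ceph-volume in a throwaway ceph container (the podman create/start lines that follow). Note --no-systemd: ceph-volume only prepares and activates the LV here; the systemd-managed OSD daemon is deployed by cephadm afterwards. Stripped of the --env/--image/--config-json plumbing, the equivalent hand-run is:

  # prepare + activate an OSD on the given LV without starting a unit
  $ cephadm ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 \
        -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
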
Jan 21 23:24:53 compute-0 podman[82753]: 2026-01-21 23:24:53.383631137 +0000 UTC m=+0.028995776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:24:53 compute-0 podman[82753]: 2026-01-21 23:24:53.547112451 +0000 UTC m=+0.192477040 container create 3b69a5d04fc39856bc6bd79dc8a6f62e3df114d98f4bbaf2bb1f1e86c9ff697e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_haslett, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:24:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:53 compute-0 systemd[1]: Started libpod-conmon-3b69a5d04fc39856bc6bd79dc8a6f62e3df114d98f4bbaf2bb1f1e86c9ff697e.scope.
Jan 21 23:24:53 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:24:53 compute-0 podman[82753]: 2026-01-21 23:24:53.653789356 +0000 UTC m=+0.299153925 container init 3b69a5d04fc39856bc6bd79dc8a6f62e3df114d98f4bbaf2bb1f1e86c9ff697e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_haslett, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 23:24:53 compute-0 podman[82753]: 2026-01-21 23:24:53.662089918 +0000 UTC m=+0.307454507 container start 3b69a5d04fc39856bc6bd79dc8a6f62e3df114d98f4bbaf2bb1f1e86c9ff697e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Jan 21 23:24:53 compute-0 podman[82753]: 2026-01-21 23:24:53.66593258 +0000 UTC m=+0.311297129 container attach 3b69a5d04fc39856bc6bd79dc8a6f62e3df114d98f4bbaf2bb1f1e86c9ff697e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_haslett, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:24:53 compute-0 beautiful_haslett[82768]: 167 167
Jan 21 23:24:53 compute-0 systemd[1]: libpod-3b69a5d04fc39856bc6bd79dc8a6f62e3df114d98f4bbaf2bb1f1e86c9ff697e.scope: Deactivated successfully.
Jan 21 23:24:53 compute-0 podman[82753]: 2026-01-21 23:24:53.668592183 +0000 UTC m=+0.313956732 container died 3b69a5d04fc39856bc6bd79dc8a6f62e3df114d98f4bbaf2bb1f1e86c9ff697e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_haslett, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:24:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f12e57cd277528e925637865cc1b0153f0ac812c81e85fbdff0bc572ee326fed-merged.mount: Deactivated successfully.
Jan 21 23:24:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:24:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:24:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:24:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:24:53 compute-0 podman[82753]: 2026-01-21 23:24:53.712170657 +0000 UTC m=+0.357535206 container remove 3b69a5d04fc39856bc6bd79dc8a6f62e3df114d98f4bbaf2bb1f1e86c9ff697e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:24:53 compute-0 systemd[1]: libpod-conmon-3b69a5d04fc39856bc6bd79dc8a6f62e3df114d98f4bbaf2bb1f1e86c9ff697e.scope: Deactivated successfully.
Jan 21 23:24:53 compute-0 podman[82793]: 2026-01-21 23:24:53.888202958 +0000 UTC m=+0.052721813 container create 3e84084c0942fe5cb60236550a807b588d7ffe9387df9c433d36bf354868c57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mclaren, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:24:53 compute-0 systemd[1]: Started libpod-conmon-3e84084c0942fe5cb60236550a807b588d7ffe9387df9c433d36bf354868c57f.scope.
Jan 21 23:24:53 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:24:53 compute-0 podman[82793]: 2026-01-21 23:24:53.866523665 +0000 UTC m=+0.031042570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:24:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c33870cce38404518bc1b44bb1adc69457e30ee47bb830671e85e8fbd9054385/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c33870cce38404518bc1b44bb1adc69457e30ee47bb830671e85e8fbd9054385/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c33870cce38404518bc1b44bb1adc69457e30ee47bb830671e85e8fbd9054385/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c33870cce38404518bc1b44bb1adc69457e30ee47bb830671e85e8fbd9054385/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c33870cce38404518bc1b44bb1adc69457e30ee47bb830671e85e8fbd9054385/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:53 compute-0 podman[82793]: 2026-01-21 23:24:53.979221259 +0000 UTC m=+0.143740144 container init 3e84084c0942fe5cb60236550a807b588d7ffe9387df9c433d36bf354868c57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 21 23:24:53 compute-0 podman[82793]: 2026-01-21 23:24:53.987516381 +0000 UTC m=+0.152035236 container start 3e84084c0942fe5cb60236550a807b588d7ffe9387df9c433d36bf354868c57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mclaren, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:24:53 compute-0 podman[82793]: 2026-01-21 23:24:53.99095366 +0000 UTC m=+0.155472535 container attach 3e84084c0942fe5cb60236550a807b588d7ffe9387df9c433d36bf354868c57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mclaren, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:24:54 compute-0 ceph-mgr[74614]: [progress INFO root] Writing back 2 completed events
Jan 21 23:24:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 21 23:24:54 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:54 compute-0 ceph-mon[74318]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:24:54 compute-0 eager_mclaren[82810]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:24:54 compute-0 eager_mclaren[82810]: --> relative data size: 1.0
Jan 21 23:24:54 compute-0 eager_mclaren[82810]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 23:24:54 compute-0 eager_mclaren[82810]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 4f45f4f4-edfc-474c-93fc-45d596171ed8
Jan 21 23:24:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "12ff17cb-cb33-4df9-9dc1-56dfadc7cbc7"} v 0) v1
Jan 21 23:24:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/893171720' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "12ff17cb-cb33-4df9-9dc1-56dfadc7cbc7"}]: dispatch
Jan 21 23:24:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 21 23:24:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 23:24:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/893171720' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "12ff17cb-cb33-4df9-9dc1-56dfadc7cbc7"}]': finished
Jan 21 23:24:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 21 23:24:55 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
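
osd new reserves the next free OSD id in the osdmap and binds it to the supplied fsid, so a retry with the same uuid returns the same id rather than allocating another. Two preparations are running concurrently here (bootstrap-osd clients from 192.168.122.101 and .100), which is why a second osd new follows and the map jumps to "2 total" at e5. By hand:

  # allocate (or re-fetch) the OSD id bound to this uuid; prints the id
  $ ceph osd new 4f45f4f4-edfc-474c-93fc-45d596171ed8
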
Jan 21 23:24:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 21 23:24:55 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:24:55 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 23:24:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8"} v 0) v1
Jan 21 23:24:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3659202672' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8"}]: dispatch
Jan 21 23:24:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 21 23:24:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 23:24:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3659202672' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8"}]': finished
Jan 21 23:24:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 21 23:24:55 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 21 23:24:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 21 23:24:55 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:24:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 21 23:24:55 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:24:55 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 23:24:55 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
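
The two metadata failures are transient: the mgr polls osd metadata as soon as the ids exist, but the mon only holds metadata once each OSD daemon boots and reports in, and the map still shows 0 up. The same query succeeds later:

  # returns host, device and version details once osd.1 has registered
  $ ceph osd metadata 1
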
Jan 21 23:24:55 compute-0 eager_mclaren[82810]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 23:24:55 compute-0 lvm[82857]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 23:24:55 compute-0 lvm[82857]: VG ceph_vg0 finished
Jan 21 23:24:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:55 compute-0 eager_mclaren[82810]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 21 23:24:55 compute-0 eager_mclaren[82810]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 21 23:24:55 compute-0 eager_mclaren[82810]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 21 23:24:55 compute-0 eager_mclaren[82810]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 21 23:24:55 compute-0 eager_mclaren[82810]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 21 23:24:55 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/893171720' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "12ff17cb-cb33-4df9-9dc1-56dfadc7cbc7"}]: dispatch
Jan 21 23:24:55 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/893171720' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "12ff17cb-cb33-4df9-9dc1-56dfadc7cbc7"}]': finished
Jan 21 23:24:55 compute-0 ceph-mon[74318]: osdmap e4: 1 total, 0 up, 1 in
Jan 21 23:24:55 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:24:55 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3659202672' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8"}]: dispatch
Jan 21 23:24:55 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3659202672' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8"}]': finished
Jan 21 23:24:55 compute-0 ceph-mon[74318]: osdmap e5: 2 total, 0 up, 2 in
Jan 21 23:24:55 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:24:55 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:24:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 21 23:24:55 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3736353484' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 21 23:24:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 21 23:24:56 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1788839997' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 21 23:24:56 compute-0 eager_mclaren[82810]:  stderr: got monmap epoch 1
Jan 21 23:24:56 compute-0 eager_mclaren[82810]: --> Creating keyring file for osd.1
Jan 21 23:24:56 compute-0 eager_mclaren[82810]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 21 23:24:56 compute-0 eager_mclaren[82810]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 21 23:24:56 compute-0 eager_mclaren[82810]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 4f45f4f4-edfc-474c-93fc-45d596171ed8 --setuser ceph --setgroup ceph
Jan 21 23:24:56 compute-0 sudo[82952]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-canydsfypilswflyxhhczejrrawqnulx ; /usr/bin/python3'
Jan 21 23:24:56 compute-0 sudo[82952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:24:56 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 21 23:24:56 compute-0 python3[82955]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
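
This Ansible task polls cluster state by running ceph status in a disposable quay.io/ceph/ceph:v18 container and extracting .osdmap.num_up_osds with jq. A simplified sketch of the same wait, with the container plumbing dropped and a local admin keyring assumed:

  # loop until both OSDs report up (field name as in the log's jq filter)
  $ until [ "$(ceph status --format json | jq .osdmap.num_up_osds)" -ge 2 ]; do
        sleep 5
    done

As the status JSON a few lines below shows, the count is still 0 at this point: both OSDs are created and "in", but their daemons have not started yet, consistent with the --no-systemd preparation above.
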
Jan 21 23:24:56 compute-0 podman[82957]: 2026-01-21 23:24:56.565463178 +0000 UTC m=+0.042262973 container create 8e548f451f52208add605ee3fba113995ee0472689ab52134e32d4e3b3f9a885 (image=quay.io/ceph/ceph:v18, name=angry_mclean, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:24:56 compute-0 systemd[1]: Started libpod-conmon-8e548f451f52208add605ee3fba113995ee0472689ab52134e32d4e3b3f9a885.scope.
Jan 21 23:24:56 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f117f841c09c9865f15cd484a3f9ac71d246f4de2671f175380a3b480e4a377/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f117f841c09c9865f15cd484a3f9ac71d246f4de2671f175380a3b480e4a377/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f117f841c09c9865f15cd484a3f9ac71d246f4de2671f175380a3b480e4a377/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:56 compute-0 podman[82957]: 2026-01-21 23:24:56.550784625 +0000 UTC m=+0.027584440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:24:56 compute-0 podman[82957]: 2026-01-21 23:24:56.661456916 +0000 UTC m=+0.138256811 container init 8e548f451f52208add605ee3fba113995ee0472689ab52134e32d4e3b3f9a885 (image=quay.io/ceph/ceph:v18, name=angry_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:24:56 compute-0 podman[82957]: 2026-01-21 23:24:56.673181356 +0000 UTC m=+0.149981151 container start 8e548f451f52208add605ee3fba113995ee0472689ab52134e32d4e3b3f9a885 (image=quay.io/ceph/ceph:v18, name=angry_mclean, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:24:56 compute-0 podman[82957]: 2026-01-21 23:24:56.676464829 +0000 UTC m=+0.153264664 container attach 8e548f451f52208add605ee3fba113995ee0472689ab52134e32d4e3b3f9a885 (image=quay.io/ceph/ceph:v18, name=angry_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 21 23:24:56 compute-0 ceph-mon[74318]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:56 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3736353484' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 21 23:24:56 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1788839997' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 21 23:24:56 compute-0 ceph-mon[74318]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 21 23:24:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:24:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 21 23:24:57 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/829797782' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 21 23:24:57 compute-0 angry_mclean[82975]: 
Jan 21 23:24:57 compute-0 angry_mclean[82975]: {"fsid":"3759241a-7f1c-520d-ba17-879943ee2f00","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":125,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1769037895,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-21T23:24:41.149988+0000","services":{}},"progress_events":{}}
Jan 21 23:24:57 compute-0 systemd[1]: libpod-8e548f451f52208add605ee3fba113995ee0472689ab52134e32d4e3b3f9a885.scope: Deactivated successfully.
Jan 21 23:24:57 compute-0 podman[82957]: 2026-01-21 23:24:57.283326517 +0000 UTC m=+0.760126312 container died 8e548f451f52208add605ee3fba113995ee0472689ab52134e32d4e3b3f9a885 (image=quay.io/ceph/ceph:v18, name=angry_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:24:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f117f841c09c9865f15cd484a3f9ac71d246f4de2671f175380a3b480e4a377-merged.mount: Deactivated successfully.
Jan 21 23:24:57 compute-0 podman[82957]: 2026-01-21 23:24:57.343265497 +0000 UTC m=+0.820065292 container remove 8e548f451f52208add605ee3fba113995ee0472689ab52134e32d4e3b3f9a885 (image=quay.io/ceph/ceph:v18, name=angry_mclean, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 23:24:57 compute-0 systemd[1]: libpod-conmon-8e548f451f52208add605ee3fba113995ee0472689ab52134e32d4e3b3f9a885.scope: Deactivated successfully.
Jan 21 23:24:57 compute-0 sudo[82952]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:57 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/829797782' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 21 23:24:58 compute-0 eager_mclaren[82810]:  stderr: 2026-01-21T23:24:56.134+0000 7fc8f4742740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 21 23:24:58 compute-0 eager_mclaren[82810]:  stderr: 2026-01-21T23:24:56.134+0000 7fc8f4742740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 21 23:24:58 compute-0 eager_mclaren[82810]:  stderr: 2026-01-21T23:24:56.134+0000 7fc8f4742740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 21 23:24:58 compute-0 eager_mclaren[82810]:  stderr: 2026-01-21T23:24:56.134+0000 7fc8f4742740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
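
The repeated _read_bdev_label / _read_fsid errors on stderr are expected during --mkfs on a fresh logical volume: bluestore first probes the device for an existing label, finds uninitialized data it cannot decode, and proceeds to write a new one, hence the "lvm prepare successful" that follows. After mkfs the label should decode cleanly, which can be checked (inside the ceph container) with:

  # dump the bluestore superblock label written by mkfs
  $ ceph-bluestore-tool show-label --dev /dev/ceph_vg0/ceph_lv0
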
Jan 21 23:24:58 compute-0 eager_mclaren[82810]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 21 23:24:58 compute-0 eager_mclaren[82810]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 21 23:24:58 compute-0 eager_mclaren[82810]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 21 23:24:58 compute-0 eager_mclaren[82810]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 21 23:24:58 compute-0 eager_mclaren[82810]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 21 23:24:58 compute-0 eager_mclaren[82810]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 21 23:24:58 compute-0 eager_mclaren[82810]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 21 23:24:58 compute-0 eager_mclaren[82810]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 21 23:24:58 compute-0 ceph-mon[74318]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:58 compute-0 eager_mclaren[82810]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
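
osd.1 is now fully prepared and activated on ceph_vg0/ceph_lv0. The lvm list invocation a few lines below is cephadm confirming the result; the same inventory can be pulled by hand:

  # enumerate ceph-volume-managed LVs and their OSD bindings
  $ cephadm ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 \
        -- lvm list --format json
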
Jan 21 23:24:58 compute-0 systemd[1]: libpod-3e84084c0942fe5cb60236550a807b588d7ffe9387df9c433d36bf354868c57f.scope: Deactivated successfully.
Jan 21 23:24:58 compute-0 systemd[1]: libpod-3e84084c0942fe5cb60236550a807b588d7ffe9387df9c433d36bf354868c57f.scope: Consumed 2.744s CPU time.
Jan 21 23:24:58 compute-0 podman[82793]: 2026-01-21 23:24:58.795451812 +0000 UTC m=+4.959970707 container died 3e84084c0942fe5cb60236550a807b588d7ffe9387df9c433d36bf354868c57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mclaren, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:24:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c33870cce38404518bc1b44bb1adc69457e30ee47bb830671e85e8fbd9054385-merged.mount: Deactivated successfully.
Jan 21 23:24:58 compute-0 podman[82793]: 2026-01-21 23:24:58.853516364 +0000 UTC m=+5.018035229 container remove 3e84084c0942fe5cb60236550a807b588d7ffe9387df9c433d36bf354868c57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mclaren, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:24:58 compute-0 systemd[1]: libpod-conmon-3e84084c0942fe5cb60236550a807b588d7ffe9387df9c433d36bf354868c57f.scope: Deactivated successfully.
Jan 21 23:24:58 compute-0 sudo[82690]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:58 compute-0 sudo[83861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:58 compute-0 sudo[83861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:58 compute-0 sudo[83861]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:59 compute-0 sudo[83886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:24:59 compute-0 sudo[83886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:59 compute-0 sudo[83886]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:59 compute-0 sudo[83911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:24:59 compute-0 sudo[83911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:59 compute-0 sudo[83911]: pam_unix(sudo:session): session closed for user root
Jan 21 23:24:59 compute-0 sudo[83936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:24:59 compute-0 sudo[83936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:24:59 compute-0 podman[84001]: 2026-01-21 23:24:59.474710924 +0000 UTC m=+0.049618416 container create 64582f76d7f778bd6336b69f8b6cf12632d3902738c27aed1ef803c84ed31306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rhodes, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 21 23:24:59 compute-0 systemd[1]: Started libpod-conmon-64582f76d7f778bd6336b69f8b6cf12632d3902738c27aed1ef803c84ed31306.scope.
Jan 21 23:24:59 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:24:59 compute-0 podman[84001]: 2026-01-21 23:24:59.453970679 +0000 UTC m=+0.028878161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:24:59 compute-0 podman[84001]: 2026-01-21 23:24:59.550891807 +0000 UTC m=+0.125799359 container init 64582f76d7f778bd6336b69f8b6cf12632d3902738c27aed1ef803c84ed31306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rhodes, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 21 23:24:59 compute-0 podman[84001]: 2026-01-21 23:24:59.558502326 +0000 UTC m=+0.133409838 container start 64582f76d7f778bd6336b69f8b6cf12632d3902738c27aed1ef803c84ed31306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:24:59 compute-0 podman[84001]: 2026-01-21 23:24:59.562349588 +0000 UTC m=+0.137257140 container attach 64582f76d7f778bd6336b69f8b6cf12632d3902738c27aed1ef803c84ed31306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rhodes, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 21 23:24:59 compute-0 pedantic_rhodes[84017]: 167 167
Jan 21 23:24:59 compute-0 systemd[1]: libpod-64582f76d7f778bd6336b69f8b6cf12632d3902738c27aed1ef803c84ed31306.scope: Deactivated successfully.
Jan 21 23:24:59 compute-0 conmon[84017]: conmon 64582f76d7f778bd6336 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-64582f76d7f778bd6336b69f8b6cf12632d3902738c27aed1ef803c84ed31306.scope/container/memory.events
Jan 21 23:24:59 compute-0 podman[84001]: 2026-01-21 23:24:59.565681632 +0000 UTC m=+0.140589154 container died 64582f76d7f778bd6336b69f8b6cf12632d3902738c27aed1ef803c84ed31306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 21 23:24:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:24:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1368d099c00166de778a5b1411fdcb52b4e3850038683b2c364a6ca45d9bd7e1-merged.mount: Deactivated successfully.
Jan 21 23:24:59 compute-0 podman[84001]: 2026-01-21 23:24:59.60462167 +0000 UTC m=+0.179529132 container remove 64582f76d7f778bd6336b69f8b6cf12632d3902738c27aed1ef803c84ed31306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rhodes, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:24:59 compute-0 systemd[1]: libpod-conmon-64582f76d7f778bd6336b69f8b6cf12632d3902738c27aed1ef803c84ed31306.scope: Deactivated successfully.
Jan 21 23:24:59 compute-0 podman[84040]: 2026-01-21 23:24:59.844794195 +0000 UTC m=+0.066846039 container create 9d2772d570b0100e88254f817b7d685fa6e64bc92bca319b9661fab7e4044a45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_pare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:24:59 compute-0 systemd[1]: Started libpod-conmon-9d2772d570b0100e88254f817b7d685fa6e64bc92bca319b9661fab7e4044a45.scope.
Jan 21 23:24:59 compute-0 podman[84040]: 2026-01-21 23:24:59.815829041 +0000 UTC m=+0.037880955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:24:59 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31dbdd301ff7a7b1450da51b08a1ec9e11dee47ba73577bef57ec5c1cad2c11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31dbdd301ff7a7b1450da51b08a1ec9e11dee47ba73577bef57ec5c1cad2c11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31dbdd301ff7a7b1450da51b08a1ec9e11dee47ba73577bef57ec5c1cad2c11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31dbdd301ff7a7b1450da51b08a1ec9e11dee47ba73577bef57ec5c1cad2c11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:24:59 compute-0 podman[84040]: 2026-01-21 23:24:59.931046875 +0000 UTC m=+0.153098679 container init 9d2772d570b0100e88254f817b7d685fa6e64bc92bca319b9661fab7e4044a45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_pare, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 23:24:59 compute-0 podman[84040]: 2026-01-21 23:24:59.942231198 +0000 UTC m=+0.164283002 container start 9d2772d570b0100e88254f817b7d685fa6e64bc92bca319b9661fab7e4044a45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_pare, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:24:59 compute-0 podman[84040]: 2026-01-21 23:24:59.945808731 +0000 UTC m=+0.167860545 container attach 9d2772d570b0100e88254f817b7d685fa6e64bc92bca319b9661fab7e4044a45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_pare, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:25:00 compute-0 priceless_pare[84056]: {
Jan 21 23:25:00 compute-0 priceless_pare[84056]:     "1": [
Jan 21 23:25:00 compute-0 priceless_pare[84056]:         {
Jan 21 23:25:00 compute-0 priceless_pare[84056]:             "devices": [
Jan 21 23:25:00 compute-0 priceless_pare[84056]:                 "/dev/loop3"
Jan 21 23:25:00 compute-0 priceless_pare[84056]:             ],
Jan 21 23:25:00 compute-0 priceless_pare[84056]:             "lv_name": "ceph_lv0",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:             "lv_size": "7511998464",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:             "name": "ceph_lv0",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:             "tags": {
Jan 21 23:25:00 compute-0 priceless_pare[84056]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:                 "ceph.cluster_name": "ceph",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:                 "ceph.crush_device_class": "",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:                 "ceph.encrypted": "0",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:                 "ceph.osd_id": "1",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:                 "ceph.type": "block",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:                 "ceph.vdo": "0"
Jan 21 23:25:00 compute-0 priceless_pare[84056]:             },
Jan 21 23:25:00 compute-0 priceless_pare[84056]:             "type": "block",
Jan 21 23:25:00 compute-0 priceless_pare[84056]:             "vg_name": "ceph_vg0"
Jan 21 23:25:00 compute-0 priceless_pare[84056]:         }
Jan 21 23:25:00 compute-0 priceless_pare[84056]:     ]
Jan 21 23:25:00 compute-0 priceless_pare[84056]: }
Jan 21 23:25:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Jan 21 23:25:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 21 23:25:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:25:00 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:25:00 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Jan 21 23:25:00 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Jan 21 23:25:00 compute-0 ceph-mon[74318]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:25:00 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 21 23:25:00 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:25:00 compute-0 systemd[1]: libpod-9d2772d570b0100e88254f817b7d685fa6e64bc92bca319b9661fab7e4044a45.scope: Deactivated successfully.
Jan 21 23:25:00 compute-0 podman[84040]: 2026-01-21 23:25:00.768940169 +0000 UTC m=+0.990991983 container died 9d2772d570b0100e88254f817b7d685fa6e64bc92bca319b9661fab7e4044a45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_pare, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 21 23:25:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f31dbdd301ff7a7b1450da51b08a1ec9e11dee47ba73577bef57ec5c1cad2c11-merged.mount: Deactivated successfully.
Jan 21 23:25:00 compute-0 podman[84040]: 2026-01-21 23:25:00.823928243 +0000 UTC m=+1.045980047 container remove 9d2772d570b0100e88254f817b7d685fa6e64bc92bca319b9661fab7e4044a45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 21 23:25:00 compute-0 systemd[1]: libpod-conmon-9d2772d570b0100e88254f817b7d685fa6e64bc92bca319b9661fab7e4044a45.scope: Deactivated successfully.
Jan 21 23:25:00 compute-0 sudo[83936]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Jan 21 23:25:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 21 23:25:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:25:00 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:25:00 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 21 23:25:00 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 21 23:25:00 compute-0 sudo[84077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:25:00 compute-0 sudo[84077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:00 compute-0 sudo[84077]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:01 compute-0 sudo[84102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:25:01 compute-0 sudo[84102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:01 compute-0 sudo[84102]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:01 compute-0 sudo[84127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:25:01 compute-0 sudo[84127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:01 compute-0 sudo[84127]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:01 compute-0 sudo[84152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:25:01 compute-0 sudo[84152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:25:01 compute-0 podman[84218]: 2026-01-21 23:25:01.585593872 +0000 UTC m=+0.037697050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:25:01 compute-0 podman[84218]: 2026-01-21 23:25:01.801266184 +0000 UTC m=+0.253369312 container create 8292b84cdab853e5b9ce68a9d58b5dd7249fd8b34e699094f21c5e868135d49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 23:25:01 compute-0 ceph-mon[74318]: Deploying daemon osd.0 on compute-1
Jan 21 23:25:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 21 23:25:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:25:01 compute-0 ceph-mon[74318]: Deploying daemon osd.1 on compute-0
Jan 21 23:25:01 compute-0 systemd[1]: Started libpod-conmon-8292b84cdab853e5b9ce68a9d58b5dd7249fd8b34e699094f21c5e868135d49b.scope.
Jan 21 23:25:01 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:25:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:25:02 compute-0 podman[84218]: 2026-01-21 23:25:02.183952873 +0000 UTC m=+0.636056061 container init 8292b84cdab853e5b9ce68a9d58b5dd7249fd8b34e699094f21c5e868135d49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:25:02 compute-0 podman[84218]: 2026-01-21 23:25:02.194102092 +0000 UTC m=+0.646205230 container start 8292b84cdab853e5b9ce68a9d58b5dd7249fd8b34e699094f21c5e868135d49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 23:25:02 compute-0 lucid_goldstine[84234]: 167 167
Jan 21 23:25:02 compute-0 systemd[1]: libpod-8292b84cdab853e5b9ce68a9d58b5dd7249fd8b34e699094f21c5e868135d49b.scope: Deactivated successfully.
Jan 21 23:25:02 compute-0 podman[84218]: 2026-01-21 23:25:02.392346395 +0000 UTC m=+0.844449533 container attach 8292b84cdab853e5b9ce68a9d58b5dd7249fd8b34e699094f21c5e868135d49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:25:02 compute-0 podman[84218]: 2026-01-21 23:25:02.39347091 +0000 UTC m=+0.845574058 container died 8292b84cdab853e5b9ce68a9d58b5dd7249fd8b34e699094f21c5e868135d49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 23:25:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b4173ae204cc9a95d3224b8339f6eb8be6ab61d7ffd4d3028bdcd64bbabdf99-merged.mount: Deactivated successfully.
Jan 21 23:25:02 compute-0 podman[84218]: 2026-01-21 23:25:02.729925861 +0000 UTC m=+1.182028989 container remove 8292b84cdab853e5b9ce68a9d58b5dd7249fd8b34e699094f21c5e868135d49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:25:02 compute-0 systemd[1]: libpod-conmon-8292b84cdab853e5b9ce68a9d58b5dd7249fd8b34e699094f21c5e868135d49b.scope: Deactivated successfully.
Jan 21 23:25:02 compute-0 ceph-mon[74318]: pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:25:03 compute-0 podman[84267]: 2026-01-21 23:25:03.064473131 +0000 UTC m=+0.030037699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:25:03 compute-0 podman[84267]: 2026-01-21 23:25:03.39417411 +0000 UTC m=+0.359738678 container create fc52ac5cf7d36d10d3fac23462bdec20c03bdac58b1c0a70e6296f7d22c798ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 21 23:25:03 compute-0 systemd[1]: Started libpod-conmon-fc52ac5cf7d36d10d3fac23462bdec20c03bdac58b1c0a70e6296f7d22c798ca.scope.
Jan 21 23:25:03 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fbea9e4536092a2319cbdcf3a1a59982661b525a41e70e62053099ebebc2e65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fbea9e4536092a2319cbdcf3a1a59982661b525a41e70e62053099ebebc2e65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fbea9e4536092a2319cbdcf3a1a59982661b525a41e70e62053099ebebc2e65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fbea9e4536092a2319cbdcf3a1a59982661b525a41e70e62053099ebebc2e65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fbea9e4536092a2319cbdcf3a1a59982661b525a41e70e62053099ebebc2e65/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:03 compute-0 podman[84267]: 2026-01-21 23:25:03.49186362 +0000 UTC m=+0.457428188 container init fc52ac5cf7d36d10d3fac23462bdec20c03bdac58b1c0a70e6296f7d22c798ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 23:25:03 compute-0 podman[84267]: 2026-01-21 23:25:03.506225553 +0000 UTC m=+0.471790161 container start fc52ac5cf7d36d10d3fac23462bdec20c03bdac58b1c0a70e6296f7d22c798ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:25:03 compute-0 podman[84267]: 2026-01-21 23:25:03.511450317 +0000 UTC m=+0.477014915 container attach fc52ac5cf7d36d10d3fac23462bdec20c03bdac58b1c0a70e6296f7d22c798ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 21 23:25:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:25:04 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate-test[84283]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Jan 21 23:25:04 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate-test[84283]:                             [--no-systemd] [--no-tmpfs]
Jan 21 23:25:04 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate-test[84283]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 21 23:25:04 compute-0 systemd[1]: libpod-fc52ac5cf7d36d10d3fac23462bdec20c03bdac58b1c0a70e6296f7d22c798ca.scope: Deactivated successfully.
Jan 21 23:25:04 compute-0 podman[84267]: 2026-01-21 23:25:04.247709156 +0000 UTC m=+1.213273764 container died fc52ac5cf7d36d10d3fac23462bdec20c03bdac58b1c0a70e6296f7d22c798ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 21 23:25:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fbea9e4536092a2319cbdcf3a1a59982661b525a41e70e62053099ebebc2e65-merged.mount: Deactivated successfully.
Jan 21 23:25:04 compute-0 podman[84267]: 2026-01-21 23:25:04.400712901 +0000 UTC m=+1.366277469 container remove fc52ac5cf7d36d10d3fac23462bdec20c03bdac58b1c0a70e6296f7d22c798ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate-test, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:25:04 compute-0 systemd[1]: libpod-conmon-fc52ac5cf7d36d10d3fac23462bdec20c03bdac58b1c0a70e6296f7d22c798ca.scope: Deactivated successfully.
Jan 21 23:25:04 compute-0 systemd[1]: Reloading.
Jan 21 23:25:04 compute-0 systemd-sysv-generator[84351]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:25:04 compute-0 systemd-rc-local-generator[84346]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:25:05 compute-0 ceph-mon[74318]: pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:25:05 compute-0 systemd[1]: Reloading.
Jan 21 23:25:05 compute-0 systemd-rc-local-generator[84388]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:25:05 compute-0 systemd-sysv-generator[84392]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:25:05 compute-0 systemd[1]: Starting Ceph osd.1 for 3759241a-7f1c-520d-ba17-879943ee2f00...
Jan 21 23:25:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:25:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:25:05 compute-0 podman[84447]: 2026-01-21 23:25:05.605153625 +0000 UTC m=+0.022280144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:25:05 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:25:05 compute-0 podman[84447]: 2026-01-21 23:25:05.733650466 +0000 UTC m=+0.150776955 container create 4e9d16bbb809a51ceaf5eb2d0210a83ca942e3511c2f3a358d09a3d458ef1cf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 21 23:25:05 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:05 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f6d46bf598d0ff5dc256d6d10d6978208c5743f55873ec23f1c102446489be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f6d46bf598d0ff5dc256d6d10d6978208c5743f55873ec23f1c102446489be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f6d46bf598d0ff5dc256d6d10d6978208c5743f55873ec23f1c102446489be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f6d46bf598d0ff5dc256d6d10d6978208c5743f55873ec23f1c102446489be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f6d46bf598d0ff5dc256d6d10d6978208c5743f55873ec23f1c102446489be/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:05 compute-0 podman[84447]: 2026-01-21 23:25:05.966152399 +0000 UTC m=+0.383278938 container init 4e9d16bbb809a51ceaf5eb2d0210a83ca942e3511c2f3a358d09a3d458ef1cf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 21 23:25:05 compute-0 podman[84447]: 2026-01-21 23:25:05.977042292 +0000 UTC m=+0.394168771 container start 4e9d16bbb809a51ceaf5eb2d0210a83ca942e3511c2f3a358d09a3d458ef1cf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 23:25:05 compute-0 podman[84447]: 2026-01-21 23:25:05.98076091 +0000 UTC m=+0.397887419 container attach 4e9d16bbb809a51ceaf5eb2d0210a83ca942e3511c2f3a358d09a3d458ef1cf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:25:06 compute-0 ceph-mon[74318]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:25:06 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:06 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:06 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate[84462]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 21 23:25:06 compute-0 bash[84447]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 21 23:25:06 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate[84462]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 21 23:25:06 compute-0 bash[84447]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 21 23:25:06 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate[84462]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 21 23:25:06 compute-0 bash[84447]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 21 23:25:06 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate[84462]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 21 23:25:06 compute-0 bash[84447]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 21 23:25:06 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate[84462]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 21 23:25:06 compute-0 bash[84447]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 21 23:25:06 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate[84462]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 21 23:25:06 compute-0 bash[84447]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 21 23:25:06 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate[84462]: --> ceph-volume raw activate successful for osd ID: 1
Jan 21 23:25:06 compute-0 bash[84447]: --> ceph-volume raw activate successful for osd ID: 1
Jan 21 23:25:07 compute-0 systemd[1]: libpod-4e9d16bbb809a51ceaf5eb2d0210a83ca942e3511c2f3a358d09a3d458ef1cf2.scope: Deactivated successfully.
Jan 21 23:25:07 compute-0 systemd[1]: libpod-4e9d16bbb809a51ceaf5eb2d0210a83ca942e3511c2f3a358d09a3d458ef1cf2.scope: Consumed 1.044s CPU time.
Jan 21 23:25:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Jan 21 23:25:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/98896974,v1:192.168.122.101:6801/98896974]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 21 23:25:07 compute-0 podman[84576]: 2026-01-21 23:25:07.06777878 +0000 UTC m=+0.042148110 container died 4e9d16bbb809a51ceaf5eb2d0210a83ca942e3511c2f3a358d09a3d458ef1cf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 21 23:25:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8f6d46bf598d0ff5dc256d6d10d6978208c5743f55873ec23f1c102446489be-merged.mount: Deactivated successfully.
Jan 21 23:25:07 compute-0 podman[84576]: 2026-01-21 23:25:07.126687288 +0000 UTC m=+0.101056608 container remove 4e9d16bbb809a51ceaf5eb2d0210a83ca942e3511c2f3a358d09a3d458ef1cf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1-activate, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:25:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:25:07 compute-0 podman[84636]: 2026-01-21 23:25:07.469954783 +0000 UTC m=+0.064932109 container create 2c6a03273f2087dd2d7d3ba87d43a2331c6c9c828ab696608f6d576bebec5eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 21 23:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba70343e60586cb646604cc1789fef926171833f92839b3fda91b1e4a3eb4e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba70343e60586cb646604cc1789fef926171833f92839b3fda91b1e4a3eb4e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba70343e60586cb646604cc1789fef926171833f92839b3fda91b1e4a3eb4e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba70343e60586cb646604cc1789fef926171833f92839b3fda91b1e4a3eb4e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba70343e60586cb646604cc1789fef926171833f92839b3fda91b1e4a3eb4e8/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:07 compute-0 podman[84636]: 2026-01-21 23:25:07.443147408 +0000 UTC m=+0.038124824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:25:07 compute-0 podman[84636]: 2026-01-21 23:25:07.541259191 +0000 UTC m=+0.136236557 container init 2c6a03273f2087dd2d7d3ba87d43a2331c6c9c828ab696608f6d576bebec5eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:25:07 compute-0 podman[84636]: 2026-01-21 23:25:07.559258279 +0000 UTC m=+0.154235615 container start 2c6a03273f2087dd2d7d3ba87d43a2331c6c9c828ab696608f6d576bebec5eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:25:07 compute-0 bash[84636]: 2c6a03273f2087dd2d7d3ba87d43a2331c6c9c828ab696608f6d576bebec5eb6
Jan 21 23:25:07 compute-0 systemd[1]: Started Ceph osd.1 for 3759241a-7f1c-520d-ba17-879943ee2f00.
Jan 21 23:25:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:25:07 compute-0 sudo[84152]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:07 compute-0 ceph-osd[84656]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 23:25:07 compute-0 ceph-osd[84656]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Jan 21 23:25:07 compute-0 ceph-osd[84656]: pidfile_write: ignore empty --pid-file
Jan 21 23:25:07 compute-0 ceph-osd[84656]: bdev(0x55889ae75800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 23:25:07 compute-0 ceph-osd[84656]: bdev(0x55889ae75800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 23:25:07 compute-0 ceph-osd[84656]: bdev(0x55889ae75800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 23:25:07 compute-0 ceph-osd[84656]: bdev(0x55889ae75800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 23:25:07 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 23:25:07 compute-0 ceph-osd[84656]: bdev(0x55889bcad800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 23:25:07 compute-0 ceph-osd[84656]: bdev(0x55889bcad800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 23:25:07 compute-0 ceph-osd[84656]: bdev(0x55889bcad800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 23:25:07 compute-0 ceph-osd[84656]: bdev(0x55889bcad800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 23:25:07 compute-0 ceph-osd[84656]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 21 23:25:07 compute-0 ceph-osd[84656]: bdev(0x55889bcad800 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 23:25:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:25:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:25:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 21 23:25:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 23:25:07 compute-0 ceph-mon[74318]: from='osd.0 [v2:192.168.122.101:6800/98896974,v1:192.168.122.101:6801/98896974]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 21 23:25:07 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:07 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:07 compute-0 sudo[84669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:25:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/98896974,v1:192.168.122.101:6801/98896974]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 21 23:25:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Jan 21 23:25:07 compute-0 sudo[84669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:07 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Jan 21 23:25:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]} v 0) v1
Jan 21 23:25:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/98896974,v1:192.168.122.101:6801/98896974]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 21 23:25:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0068 at location {host=compute-1,root=default}
Jan 21 23:25:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 21 23:25:07 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 21 23:25:07 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:07 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 23:25:07 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 23:25:07 compute-0 sudo[84669]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:07 compute-0 sudo[84694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:25:07 compute-0 sudo[84694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:07 compute-0 sudo[84694]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:07 compute-0 sudo[84719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:25:07 compute-0 sudo[84719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:07 compute-0 sudo[84719]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:07 compute-0 ceph-osd[84656]: bdev(0x55889ae75800 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 23:25:07 compute-0 sudo[84744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:25:07 compute-0 sudo[84744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:08 compute-0 ceph-osd[84656]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 21 23:25:08 compute-0 ceph-osd[84656]: load: jerasure load: lrc 
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2ec00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2ec00 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 23:25:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:25:08 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:25:08 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:08 compute-0 podman[84817]: 2026-01-21 23:25:08.420138648 +0000 UTC m=+0.064510645 container create 1be3b154a291992b68d74992e57223a20bfab24c6907ea07b9ad9d044edfd0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2ec00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2ec00 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 23:25:08 compute-0 systemd[1]: Started libpod-conmon-1be3b154a291992b68d74992e57223a20bfab24c6907ea07b9ad9d044edfd0ab.scope.
Jan 21 23:25:08 compute-0 podman[84817]: 2026-01-21 23:25:08.393188498 +0000 UTC m=+0.037560545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:25:08 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:25:08 compute-0 podman[84817]: 2026-01-21 23:25:08.530072264 +0000 UTC m=+0.174444251 container init 1be3b154a291992b68d74992e57223a20bfab24c6907ea07b9ad9d044edfd0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:25:08 compute-0 podman[84817]: 2026-01-21 23:25:08.537721656 +0000 UTC m=+0.182093643 container start 1be3b154a291992b68d74992e57223a20bfab24c6907ea07b9ad9d044edfd0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 21 23:25:08 compute-0 podman[84817]: 2026-01-21 23:25:08.541397911 +0000 UTC m=+0.185769888 container attach 1be3b154a291992b68d74992e57223a20bfab24c6907ea07b9ad9d044edfd0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 21 23:25:08 compute-0 youthful_hamilton[84838]: 167 167
Jan 21 23:25:08 compute-0 systemd[1]: libpod-1be3b154a291992b68d74992e57223a20bfab24c6907ea07b9ad9d044edfd0ab.scope: Deactivated successfully.
Jan 21 23:25:08 compute-0 podman[84817]: 2026-01-21 23:25:08.544279592 +0000 UTC m=+0.188651599 container died 1be3b154a291992b68d74992e57223a20bfab24c6907ea07b9ad9d044edfd0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:25:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f732387020d4b9f70fdbb274863450c675232bff445ddf1f1520d1854f338d59-merged.mount: Deactivated successfully.
Jan 21 23:25:08 compute-0 podman[84817]: 2026-01-21 23:25:08.593817555 +0000 UTC m=+0.238189562 container remove 1be3b154a291992b68d74992e57223a20bfab24c6907ea07b9ad9d044edfd0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:25:08 compute-0 systemd[1]: libpod-conmon-1be3b154a291992b68d74992e57223a20bfab24c6907ea07b9ad9d044edfd0ab.scope: Deactivated successfully.
Jan 21 23:25:08 compute-0 ceph-osd[84656]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 21 23:25:08 compute-0 ceph-osd[84656]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2ec00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2f400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2f400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2f400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2f400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluefs mount
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluefs mount shared_bdev_used = 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: RocksDB version: 7.9.2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Git sha 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: DB SUMMARY
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: DB Session ID:  8ZTXC2YB4KX1519G7U54
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: CURRENT file:  CURRENT
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                         Options.error_if_exists: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.create_if_missing: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                                     Options.env: 0x55889bcffc70
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                                Options.info_log: 0x55889aef2ba0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                              Options.statistics: (nil)
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                               Options.use_fsync: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                              Options.db_log_dir: 
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                                 Options.wal_dir: db.wal
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.write_buffer_manager: 0x55889be08460
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.unordered_write: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                               Options.row_cache: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                              Options.wal_filter: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.two_write_queues: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.wal_compression: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.atomic_flush: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.max_background_jobs: 4
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.max_background_compactions: -1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.max_subcompactions: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.max_open_files: -1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Compression algorithms supported:
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         kZSTD supported: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         kXpressCompression supported: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         kBZip2Compression supported: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         kLZ4Compression supported: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         kZlibCompression supported: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         kLZ4HCCompression supported: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         kSnappyCompression supported: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aef2600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee8dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 21 23:25:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aef2600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee8dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aef2600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee8dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aef2600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee8dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aef2600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee8dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aef2600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee8dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aef2600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee8dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:08 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/98896974,v1:192.168.122.101:6801/98896974]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 21 23:25:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aef25c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee8430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
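The [O-0] dump that ends here prints the same values already shown for the earlier column families, so the per-family dumps are easiest to compare mechanically rather than by eye. A minimal sketch, assuming only stdlib Python and the "Options.<name>: <value>" line shape used throughout this log (the sample lines are copied from the dump above; the regex itself is illustrative, not part of Ceph or RocksDB):

    import re

    # Journal lines copied verbatim from the dump above.
    sample = """\
    Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
    Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
    Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
    """

    # Capture the option name (dots and [n] indices included) and its value.
    opt_re = re.compile(r"Options\.([\w.\[\]]+):\s+(.*)$")
    options = {}
    for line in sample.splitlines():
        m = opt_re.search(line)
        if m:
            options[m.group(1)] = m.group(2).strip()

    print(options["write_buffer_size"])  # -> 16777216

Collecting one such dict per "Options for column family [...]" header and diffing them confirms what the raw dumps suggest: the families shown here print identical configurations.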
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aef25c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee8430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:08 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
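Across these per-family dumps, every table_factory block points at the same block_cache address (0x55889aee8430), i.e. one BinnedLRUCache shared by all column families. A quick stdlib check of the figures printed under block_cache_options, assuming the conventional 2**num_shard_bits sharding for this kind of LRU cache:

    # capacity and num_shard_bits as printed under block_cache_options above.
    capacity = 536870912
    num_shard_bits = 4

    assert capacity == 512 * 2**20  # exactly 512 MiB
    print(capacity / 2**20, "MiB in", 2**num_shard_bits, "shards")  # -> 512.0 MiB in 16 shards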
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aef25c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee8430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 21 23:25:08 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 21 23:25:08 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:08 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 23:25:08 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
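The two mgr failures above line up with the osdmap state ("2 total, 0 up, 2 in"): a hedged reading is that the monitors simply hold no metadata for osd.0/osd.1 yet, since OSDs report their metadata when they boot, and re-issuing the same "osd metadata" mon command once the OSDs come up should succeed. A sketch of that check, assuming a reachable cluster, admin credentials, and a ceph CLI on this host:

    import json
    import subprocess

    # Same mon command the mgr dispatched above: {"prefix": "osd metadata", "id": 0}
    out = subprocess.run(
        ["ceph", "osd", "metadata", "0", "--format", "json"],
        capture_output=True, text=True,
    )
    if out.returncode == 0:
        meta = json.loads(out.stdout)
        print(meta.get("hostname"), meta.get("osd_objectstore"))
    else:
        # While the OSD is still down, this fails much like the mgr lines above.
        print("no metadata yet:", out.stderr.strip())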
Jan 21 23:25:08 compute-0 ceph-mon[74318]: pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:25:08 compute-0 ceph-mon[74318]: from='osd.0 [v2:192.168.122.101:6800/98896974,v1:192.168.122.101:6801/98896974]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 21 23:25:08 compute-0 ceph-mon[74318]: osdmap e6: 2 total, 0 up, 2 in
Jan 21 23:25:08 compute-0 ceph-mon[74318]: from='osd.0 [v2:192.168.122.101:6800/98896974,v1:192.168.122.101:6801/98896974]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 21 23:25:08 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:08 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:08 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:08 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1c275ec7-6035-41ba-90ac-216ce35a9a24
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769037908740845, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 21 23:25:08 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/98896974; not ready for session (expect reconnect)
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769037908741073, "job": 1, "event": "recovery_finished"}
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
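The _open_db line is handy because it prints the effective RocksDB option string BlueStore applied, as comma-separated key=value pairs. A minimal sketch that recovers a dict from it (the string is copied verbatim from the line above):

    # Option string exactly as printed by _open_db above.
    opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    parsed = dict(kv.split("=", 1) for kv in opts.split(","))
    print(parsed["write_buffer_size"])  # -> 16777216, matching the Options dumps above

The values agree with the per-column-family dumps earlier in this log (write_buffer_size 16777216, max_write_buffer_number 64, level0_file_num_compaction_trigger 8, and so on).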
Jan 21 23:25:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 21 23:25:08 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: freelist init
Jan 21 23:25:08 compute-0 ceph-osd[84656]: freelist _read_cfg
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluefs umount
Jan 21 23:25:08 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2f400 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 23:25:08 compute-0 podman[84876]: 2026-01-21 23:25:08.799758709 +0000 UTC m=+0.060273852 container create 977890ad5d36ead06c5339bc0e09cfdbd6e160536781a5de71954fc036d56294 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hermann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Jan 21 23:25:08 compute-0 systemd[1]: Started libpod-conmon-977890ad5d36ead06c5339bc0e09cfdbd6e160536781a5de71954fc036d56294.scope.
Jan 21 23:25:08 compute-0 podman[84876]: 2026-01-21 23:25:08.770227978 +0000 UTC m=+0.030743151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:25:08 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9debff5fc0aec6a25a54c3d254a23983847d370c50aa2afe71517bff775365/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9debff5fc0aec6a25a54c3d254a23983847d370c50aa2afe71517bff775365/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9debff5fc0aec6a25a54c3d254a23983847d370c50aa2afe71517bff775365/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9debff5fc0aec6a25a54c3d254a23983847d370c50aa2afe71517bff775365/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:08 compute-0 podman[84876]: 2026-01-21 23:25:08.912483814 +0000 UTC m=+0.172998927 container init 977890ad5d36ead06c5339bc0e09cfdbd6e160536781a5de71954fc036d56294 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hermann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:25:08 compute-0 podman[84876]: 2026-01-21 23:25:08.918281587 +0000 UTC m=+0.178796680 container start 977890ad5d36ead06c5339bc0e09cfdbd6e160536781a5de71954fc036d56294 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hermann, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 21 23:25:08 compute-0 podman[84876]: 2026-01-21 23:25:08.921398576 +0000 UTC m=+0.181913669 container attach 977890ad5d36ead06c5339bc0e09cfdbd6e160536781a5de71954fc036d56294 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2f400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2f400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2f400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bdev(0x55889bd2f400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
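The bdev open line reports the same size three ways (decimal bytes, hex, human-readable), which makes it a convenient arithmetic cross-check. A quick stdlib verification of the figures printed above:

    # Figures copied from the bdev open line above.
    size = int("0x1bfc00000", 16)

    assert size == 7511998464                # decimal and hex agree
    print(round(size / 2**30, 2), "GiB")     # -> 7.0 GiB (6.996... rounded)
    print(size // 4096, "blocks of 4 KiB")   # -> 1833984, an exact multiple of block_size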
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluefs mount
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluefs mount shared_bdev_used = 4718592
Jan 21 23:25:08 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: RocksDB version: 7.9.2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Git sha 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: DB SUMMARY
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: DB Session ID:  8ZTXC2YB4KX1519G7U55
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: CURRENT file:  CURRENT
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                         Options.error_if_exists: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.create_if_missing: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                                     Options.env: 0x55889af34690
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                                Options.info_log: 0x55889aef38a0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                              Options.statistics: (nil)
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                               Options.use_fsync: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                              Options.db_log_dir: 
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                                 Options.wal_dir: db.wal
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                    Options.write_buffer_manager: 0x55889be08460
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.unordered_write: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                               Options.row_cache: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                              Options.wal_filter: None
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.two_write_queues: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.wal_compression: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.atomic_flush: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.max_background_jobs: 4
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.max_background_compactions: -1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.max_subcompactions: 1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.max_open_files: -1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb: Compression algorithms supported:
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         kZSTD supported: 0
Jan 21 23:25:08 compute-0 ceph-osd[84656]: rocksdb:         kXpressCompression supported: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         kBZip2Compression supported: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         kLZ4Compression supported: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         kZlibCompression supported: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         kLZ4HCCompression supported: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         kSnappyCompression supported: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aecfb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee9610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aecfb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee9610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aecfb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee9610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aecfb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee9610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aecfb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee9610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aecfb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee9610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aecfb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee9610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aef3e40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee9770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aef3e40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee9770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:           Options.merge_operator: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55889aef3e40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55889aee9770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.compression: LZ4
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.num_levels: 7
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.bloom_locality: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                               Options.ttl: 2592000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                       Options.enable_blob_files: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                           Options.min_blob_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1c275ec7-6035-41ba-90ac-216ce35a9a24
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769037909011749, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769037909016482, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037909, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1c275ec7-6035-41ba-90ac-216ce35a9a24", "db_session_id": "8ZTXC2YB4KX1519G7U55", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769037909018802, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 467, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037909, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1c275ec7-6035-41ba-90ac-216ce35a9a24", "db_session_id": "8ZTXC2YB4KX1519G7U55", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769037909021877, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037909, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1c275ec7-6035-41ba-90ac-216ce35a9a24", "db_session_id": "8ZTXC2YB4KX1519G7U55", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769037909023135, "job": 1, "event": "recovery_finished"}
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55889afa7c00
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: DB pointer 0x55889bdf1a00
Jan 21 23:25:09 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 21 23:25:09 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 21 23:25:09 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 23:25:09 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 21 23:25:09 compute-0 ceph-osd[84656]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 21 23:25:09 compute-0 ceph-osd[84656]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 21 23:25:09 compute-0 ceph-osd[84656]: _get_class not permitted to load lua
Jan 21 23:25:09 compute-0 ceph-osd[84656]: _get_class not permitted to load sdk
Jan 21 23:25:09 compute-0 ceph-osd[84656]: _get_class not permitted to load test_remote_reads
Jan 21 23:25:09 compute-0 ceph-osd[84656]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 21 23:25:09 compute-0 ceph-osd[84656]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 21 23:25:09 compute-0 ceph-osd[84656]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 21 23:25:09 compute-0 ceph-osd[84656]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 21 23:25:09 compute-0 ceph-osd[84656]: osd.1 0 load_pgs
Jan 21 23:25:09 compute-0 ceph-osd[84656]: osd.1 0 load_pgs opened 0 pgs
Jan 21 23:25:09 compute-0 ceph-osd[84656]: osd.1 0 log_to_monitors true
Jan 21 23:25:09 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1[84652]: 2026-01-21T23:25:09.052+0000 7fe187931740 -1 osd.1 0 log_to_monitors true
Jan 21 23:25:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Jan 21 23:25:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3586179740,v1:192.168.122.100:6803/3586179740]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 21 23:25:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:25:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:25:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:25:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:25:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:25:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:25:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:25:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 21 23:25:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 23:25:09 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/98896974; not ready for session (expect reconnect)
Jan 21 23:25:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 21 23:25:09 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:09 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 23:25:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3586179740,v1:192.168.122.100:6803/3586179740]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 21 23:25:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Jan 21 23:25:09 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Jan 21 23:25:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]} v 0) v1
Jan 21 23:25:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3586179740,v1:192.168.122.100:6803/3586179740]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 21 23:25:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e8 create-or-move crush item name 'osd.1' initial_weight 0.0068 at location {host=compute-0,root=default}
Jan 21 23:25:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 21 23:25:09 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 21 23:25:09 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:09 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 23:25:09 compute-0 ceph-mon[74318]: from='osd.0 [v2:192.168.122.101:6800/98896974,v1:192.168.122.101:6801/98896974]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 21 23:25:09 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 23:25:09 compute-0 ceph-mon[74318]: osdmap e7: 2 total, 0 up, 2 in
Jan 21 23:25:09 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:09 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:09 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:09 compute-0 ceph-mon[74318]: from='osd.1 [v2:192.168.122.100:6802/3586179740,v1:192.168.122.100:6803/3586179740]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 21 23:25:09 compute-0 romantic_hermann[85073]: {
Jan 21 23:25:09 compute-0 romantic_hermann[85073]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:25:09 compute-0 romantic_hermann[85073]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:25:09 compute-0 romantic_hermann[85073]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:25:09 compute-0 romantic_hermann[85073]:         "osd_id": 1,
Jan 21 23:25:09 compute-0 romantic_hermann[85073]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:25:09 compute-0 romantic_hermann[85073]:         "type": "bluestore"
Jan 21 23:25:09 compute-0 romantic_hermann[85073]:     }
Jan 21 23:25:09 compute-0 romantic_hermann[85073]: }
Jan 21 23:25:09 compute-0 systemd[1]: libpod-977890ad5d36ead06c5339bc0e09cfdbd6e160536781a5de71954fc036d56294.scope: Deactivated successfully.
Jan 21 23:25:09 compute-0 podman[84876]: 2026-01-21 23:25:09.814719177 +0000 UTC m=+1.075234330 container died 977890ad5d36ead06c5339bc0e09cfdbd6e160536781a5de71954fc036d56294 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 21 23:25:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d9debff5fc0aec6a25a54c3d254a23983847d370c50aa2afe71517bff775365-merged.mount: Deactivated successfully.
Jan 21 23:25:09 compute-0 podman[84876]: 2026-01-21 23:25:09.877795486 +0000 UTC m=+1.138310579 container remove 977890ad5d36ead06c5339bc0e09cfdbd6e160536781a5de71954fc036d56294 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hermann, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:25:09 compute-0 systemd[1]: libpod-conmon-977890ad5d36ead06c5339bc0e09cfdbd6e160536781a5de71954fc036d56294.scope: Deactivated successfully.
Jan 21 23:25:09 compute-0 sudo[84744]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:25:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:25:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:10 compute-0 sudo[85323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:25:10 compute-0 sudo[85323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:10 compute-0 sudo[85323]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:10 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 21 23:25:10 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 21 23:25:10 compute-0 sudo[85348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:25:10 compute-0 sudo[85348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:10 compute-0 sudo[85348]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:10 compute-0 sudo[85373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:25:10 compute-0 sudo[85373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:10 compute-0 sudo[85373]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:10 compute-0 sudo[85398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:25:10 compute-0 sudo[85398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:10 compute-0 sudo[85398]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:10 compute-0 sudo[85423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:25:10 compute-0 sudo[85423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:10 compute-0 sudo[85423]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:10 compute-0 sudo[85448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 21 23:25:10 compute-0 sudo[85448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:10 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/98896974; not ready for session (expect reconnect)
Jan 21 23:25:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 21 23:25:10 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:10 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 23:25:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 21 23:25:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 23:25:10 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3586179740,v1:192.168.122.100:6803/3586179740]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 21 23:25:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e9 e9: 2 total, 0 up, 2 in
Jan 21 23:25:10 compute-0 ceph-osd[84656]: osd.1 0 done with init, starting boot process
Jan 21 23:25:10 compute-0 ceph-osd[84656]: osd.1 0 start_boot
Jan 21 23:25:10 compute-0 ceph-osd[84656]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 21 23:25:10 compute-0 ceph-osd[84656]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 21 23:25:10 compute-0 ceph-osd[84656]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 21 23:25:10 compute-0 ceph-osd[84656]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 21 23:25:10 compute-0 ceph-osd[84656]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 21 23:25:10 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 0 up, 2 in
Jan 21 23:25:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 21 23:25:10 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 21 23:25:10 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 23:25:10 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:10 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 23:25:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 21 23:25:10 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3586179740; not ready for session (expect reconnect)
Jan 21 23:25:10 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:10 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 23:25:10 compute-0 ceph-mon[74318]: purged_snaps scrub starts
Jan 21 23:25:10 compute-0 ceph-mon[74318]: purged_snaps scrub ok
Jan 21 23:25:10 compute-0 ceph-mon[74318]: pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:25:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:10 compute-0 ceph-mon[74318]: from='osd.1 [v2:192.168.122.100:6802/3586179740,v1:192.168.122.100:6803/3586179740]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 21 23:25:10 compute-0 ceph-mon[74318]: osdmap e8: 2 total, 0 up, 2 in
Jan 21 23:25:10 compute-0 ceph-mon[74318]: from='osd.1 [v2:192.168.122.100:6802/3586179740,v1:192.168.122.100:6803/3586179740]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 21 23:25:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:10 compute-0 ceph-mon[74318]: from='osd.1 [v2:192.168.122.100:6802/3586179740,v1:192.168.122.100:6803/3586179740]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 21 23:25:10 compute-0 ceph-mon[74318]: osdmap e9: 2 total, 0 up, 2 in
Jan 21 23:25:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:25:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:11 compute-0 podman[85544]: 2026-01-21 23:25:11.237680621 +0000 UTC m=+0.059792716 container exec 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:25:11 compute-0 podman[85544]: 2026-01-21 23:25:11.325250933 +0000 UTC m=+0.147363048 container exec_died 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:25:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:25:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v48: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:25:11 compute-0 sudo[85448]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:25:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:25:11 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/98896974; not ready for session (expect reconnect)
Jan 21 23:25:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 21 23:25:11 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:11 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 23:25:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:11 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3586179740; not ready for session (expect reconnect)
Jan 21 23:25:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 21 23:25:11 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:11 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 23:25:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:25:11 compute-0 sudo[85630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:25:11 compute-0 sudo[85630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:11 compute-0 sudo[85630]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:25:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:11 compute-0 sudo[85655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:25:11 compute-0 sudo[85655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:11 compute-0 sudo[85655]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:11 compute-0 sudo[85681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:25:11 compute-0 sudo[85681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:11 compute-0 sudo[85681]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:12 compute-0 sudo[85706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:25:12 compute-0 sudo[85706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:25:12 compute-0 sudo[85706]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:12 compute-0 sudo[85761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:25:12 compute-0 sudo[85761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:12 compute-0 sudo[85761]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:12 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3586179740; not ready for session (expect reconnect)
Jan 21 23:25:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 21 23:25:12 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:12 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 23:25:12 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/98896974; not ready for session (expect reconnect)
Jan 21 23:25:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 21 23:25:12 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:12 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 23:25:12 compute-0 sudo[85786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:25:12 compute-0 sudo[85786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:12 compute-0 sudo[85786]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:12 compute-0 sudo[85811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:25:12 compute-0 sudo[85811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:12 compute-0 sudo[85811]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:25:12 compute-0 sudo[85836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- inventory --format=json-pretty --filter-for-batch
Jan 21 23:25:12 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:12 compute-0 sudo[85836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:25:13 compute-0 ceph-mon[74318]: purged_snaps scrub starts
Jan 21 23:25:13 compute-0 ceph-mon[74318]: purged_snaps scrub ok
Jan 21 23:25:13 compute-0 ceph-mon[74318]: pgmap v48: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:25:13 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:13 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:13 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:13 compute-0 podman[85902]: 2026-01-21 23:25:13.33334293 +0000 UTC m=+0.050469693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:25:13 compute-0 podman[85902]: 2026-01-21 23:25:13.430964599 +0000 UTC m=+0.148091262 container create c4965dd4f199d35f33b60241d3ec07632e286c238f27c55de2bec678092c1838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 23:25:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:25:13 compute-0 systemd[1]: Started libpod-conmon-c4965dd4f199d35f33b60241d3ec07632e286c238f27c55de2bec678092c1838.scope.
Jan 21 23:25:13 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:25:13 compute-0 podman[85902]: 2026-01-21 23:25:13.740610203 +0000 UTC m=+0.457736866 container init c4965dd4f199d35f33b60241d3ec07632e286c238f27c55de2bec678092c1838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:25:13 compute-0 podman[85902]: 2026-01-21 23:25:13.751951201 +0000 UTC m=+0.469077884 container start c4965dd4f199d35f33b60241d3ec07632e286c238f27c55de2bec678092c1838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:25:13 compute-0 compassionate_cori[85918]: 167 167
Jan 21 23:25:13 compute-0 systemd[1]: libpod-c4965dd4f199d35f33b60241d3ec07632e286c238f27c55de2bec678092c1838.scope: Deactivated successfully.
Jan 21 23:25:13 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/98896974; not ready for session (expect reconnect)
Jan 21 23:25:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 21 23:25:13 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:13 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 23:25:13 compute-0 podman[85902]: 2026-01-21 23:25:13.766030205 +0000 UTC m=+0.483156878 container attach c4965dd4f199d35f33b60241d3ec07632e286c238f27c55de2bec678092c1838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 21 23:25:13 compute-0 podman[85902]: 2026-01-21 23:25:13.76651555 +0000 UTC m=+0.483642213 container died c4965dd4f199d35f33b60241d3ec07632e286c238f27c55de2bec678092c1838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cori, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 21 23:25:13 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3586179740; not ready for session (expect reconnect)
Jan 21 23:25:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 21 23:25:13 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:13 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 23:25:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb84220ff1c0fc84ac48c2b04ecdfd405b84b7c2114b9a8164fe22f702aebc4e-merged.mount: Deactivated successfully.
Jan 21 23:25:14 compute-0 podman[85902]: 2026-01-21 23:25:14.060909164 +0000 UTC m=+0.778035857 container remove c4965dd4f199d35f33b60241d3ec07632e286c238f27c55de2bec678092c1838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cori, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:25:14 compute-0 systemd[1]: libpod-conmon-c4965dd4f199d35f33b60241d3ec07632e286c238f27c55de2bec678092c1838.scope: Deactivated successfully.
Jan 21 23:25:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 21 23:25:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 23:25:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Jan 21 23:25:14 compute-0 ceph-mon[74318]: OSD bench result of 6435.575725 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 23:25:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:14 compute-0 podman[85942]: 2026-01-21 23:25:14.299031014 +0000 UTC m=+0.071865399 container create 3cc8292eb7410ab1121b06cb57d65e331ed4e9f5534334be146a99b228b7500c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chebyshev, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 23:25:14 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/98896974,v1:192.168.122.101:6801/98896974] boot
Jan 21 23:25:14 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Jan 21 23:25:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 21 23:25:14 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 21 23:25:14 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 23:25:14 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:14 compute-0 podman[85942]: 2026-01-21 23:25:14.258242387 +0000 UTC m=+0.031076832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:25:14 compute-0 systemd[1]: Started libpod-conmon-3cc8292eb7410ab1121b06cb57d65e331ed4e9f5534334be146a99b228b7500c.scope.
Jan 21 23:25:14 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8884ed671bbb88a7984ef6061796213aa4644849a17687ec096d90c493840e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8884ed671bbb88a7984ef6061796213aa4644849a17687ec096d90c493840e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8884ed671bbb88a7984ef6061796213aa4644849a17687ec096d90c493840e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8884ed671bbb88a7984ef6061796213aa4644849a17687ec096d90c493840e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:14 compute-0 podman[85942]: 2026-01-21 23:25:14.510377308 +0000 UTC m=+0.283211683 container init 3cc8292eb7410ab1121b06cb57d65e331ed4e9f5534334be146a99b228b7500c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 21 23:25:14 compute-0 podman[85942]: 2026-01-21 23:25:14.517143862 +0000 UTC m=+0.289978217 container start 3cc8292eb7410ab1121b06cb57d65e331ed4e9f5534334be146a99b228b7500c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:25:14 compute-0 podman[85942]: 2026-01-21 23:25:14.624944091 +0000 UTC m=+0.397778546 container attach 3cc8292eb7410ab1121b06cb57d65e331ed4e9f5534334be146a99b228b7500c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chebyshev, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:25:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 21 23:25:14 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:14 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3586179740; not ready for session (expect reconnect)
Jan 21 23:25:14 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 23:25:15 compute-0 ceph-mgr[74614]: [devicehealth INFO root] creating mgr pool
Jan 21 23:25:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Jan 21 23:25:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 21 23:25:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:25:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 21 23:25:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 23:25:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:25:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 21 23:25:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e11 e11: 2 total, 1 up, 2 in
Jan 21 23:25:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Jan 21 23:25:15 compute-0 ceph-mon[74318]: pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 23:25:15 compute-0 ceph-mon[74318]: osd.0 [v2:192.168.122.101:6800/98896974,v1:192.168.122.101:6801/98896974] boot
Jan 21 23:25:15 compute-0 ceph-mon[74318]: osdmap e10: 2 total, 1 up, 2 in
Jan 21 23:25:15 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 21 23:25:15 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:15 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:15 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 21 23:25:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 21 23:25:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 21 23:25:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 21 23:25:15 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 1 up, 2 in
Jan 21 23:25:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 21 23:25:15 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:15 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 23:25:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Jan 21 23:25:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 21 23:25:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:25:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:25:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Jan 21 23:25:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 21 23:25:15 compute-0 ceph-mgr[74614]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Jan 21 23:25:15 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Jan 21 23:25:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 21 23:25:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]: [
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:     {
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:         "available": false,
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:         "ceph_device": false,
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:         "lsm_data": {},
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:         "lvs": [],
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:         "path": "/dev/sr0",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:         "rejected_reasons": [
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "Insufficient space (<5GB)",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "Has a FileSystem"
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:         ],
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:         "sys_api": {
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "actuators": null,
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "device_nodes": "sr0",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "devname": "sr0",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "human_readable_size": "482.00 KB",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "id_bus": "ata",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "model": "QEMU DVD-ROM",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "nr_requests": "2",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "parent": "/dev/sr0",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "partitions": {},
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "path": "/dev/sr0",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "removable": "1",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "rev": "2.5+",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "ro": "0",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "rotational": "1",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "sas_address": "",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "sas_device_handle": "",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "scheduler_mode": "mq-deadline",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "sectors": 0,
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "sectorsize": "2048",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "size": 493568.0,
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "support_discard": "2048",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "type": "disk",
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:             "vendor": "QEMU"
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:         }
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]:     }
Jan 21 23:25:15 compute-0 pedantic_chebyshev[85958]: ]
Jan 21 23:25:15 compute-0 systemd[1]: libpod-3cc8292eb7410ab1121b06cb57d65e331ed4e9f5534334be146a99b228b7500c.scope: Deactivated successfully.
Jan 21 23:25:15 compute-0 systemd[1]: libpod-3cc8292eb7410ab1121b06cb57d65e331ed4e9f5534334be146a99b228b7500c.scope: Consumed 1.217s CPU time.
Jan 21 23:25:15 compute-0 conmon[85958]: conmon 3cc8292eb7410ab1121b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3cc8292eb7410ab1121b06cb57d65e331ed4e9f5534334be146a99b228b7500c.scope/container/memory.events
Jan 21 23:25:15 compute-0 podman[85942]: 2026-01-21 23:25:15.723779144 +0000 UTC m=+1.496613499 container died 3cc8292eb7410ab1121b06cb57d65e331ed4e9f5534334be146a99b228b7500c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 21 23:25:15 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3586179740; not ready for session (expect reconnect)
Jan 21 23:25:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 21 23:25:15 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:15 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 23:25:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc8884ed671bbb88a7984ef6061796213aa4644849a17687ec096d90c493840e-merged.mount: Deactivated successfully.
Jan 21 23:25:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 21 23:25:16 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 23:25:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 21 23:25:16 compute-0 ceph-mon[74318]: osdmap e11: 2 total, 1 up, 2 in
Jan 21 23:25:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 21 23:25:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 21 23:25:16 compute-0 ceph-mon[74318]: Adjusting osd_memory_target on compute-1 to  5247M
Jan 21 23:25:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:16 compute-0 ceph-mon[74318]: pgmap v52: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 21 23:25:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:16 compute-0 podman[85942]: 2026-01-21 23:25:16.609680271 +0000 UTC m=+2.382514666 container remove 3cc8292eb7410ab1121b06cb57d65e331ed4e9f5534334be146a99b228b7500c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:25:16 compute-0 sudo[85836]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:25:16 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 21 23:25:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e12 e12: 2 total, 1 up, 2 in
Jan 21 23:25:16 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 1 up, 2 in
Jan 21 23:25:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 21 23:25:16 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:16 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 23:25:16 compute-0 systemd[1]: libpod-conmon-3cc8292eb7410ab1121b06cb57d65e331ed4e9f5534334be146a99b228b7500c.scope: Deactivated successfully.
Jan 21 23:25:16 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:25:16 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3586179740; not ready for session (expect reconnect)
Jan 21 23:25:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 21 23:25:16 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:16 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 23:25:16 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:25:16 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:25:16 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Jan 21 23:25:16 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 21 23:25:16 compute-0 ceph-mgr[74614]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Jan 21 23:25:16 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Jan 21 23:25:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 21 23:25:16 compute-0 ceph-mgr[74614]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 21 23:25:16 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 21 23:25:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:25:17 compute-0 ceph-mon[74318]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 23:25:17 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 21 23:25:17 compute-0 ceph-mon[74318]: osdmap e12: 2 total, 1 up, 2 in
Jan 21 23:25:17 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:17 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:17 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:17 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:17 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:17 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:17 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 21 23:25:17 compute-0 ceph-mon[74318]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 21 23:25:17 compute-0 ceph-mon[74318]: Unable to set osd_memory_target on compute-0 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 21 23:25:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 21 23:25:17 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3586179740; not ready for session (expect reconnect)
Jan 21 23:25:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 21 23:25:17 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:17 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 23:25:18 compute-0 ceph-osd[84656]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 10.187 iops: 2607.880 elapsed_sec: 1.150
Jan 21 23:25:18 compute-0 ceph-osd[84656]: log_channel(cluster) log [WRN] : OSD bench result of 2607.880277 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 23:25:18 compute-0 ceph-osd[84656]: osd.1 0 waiting for initial osdmap
Jan 21 23:25:18 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1[84652]: 2026-01-21T23:25:18.379+0000 7fe1838b1640 -1 osd.1 0 waiting for initial osdmap
Jan 21 23:25:18 compute-0 ceph-osd[84656]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 21 23:25:18 compute-0 ceph-osd[84656]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 21 23:25:18 compute-0 ceph-osd[84656]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 21 23:25:18 compute-0 ceph-osd[84656]: osd.1 12 check_osdmap_features require_osd_release unknown -> reef
Jan 21 23:25:18 compute-0 ceph-osd[84656]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 21 23:25:18 compute-0 ceph-osd[84656]: osd.1 12 set_numa_affinity not setting numa affinity
Jan 21 23:25:18 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-osd-1[84652]: 2026-01-21T23:25:18.418+0000 7fe17eed9640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 21 23:25:18 compute-0 ceph-osd[84656]: osd.1 12 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Jan 21 23:25:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 21 23:25:18 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 21 23:25:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Jan 21 23:25:18 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/3586179740,v1:192.168.122.100:6803/3586179740] boot
Jan 21 23:25:18 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Jan 21 23:25:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 21 23:25:18 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:18 compute-0 ceph-mon[74318]: pgmap v54: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 21 23:25:18 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:18 compute-0 ceph-osd[84656]: osd.1 13 state: booting -> active
Jan 21 23:25:18 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 21 23:25:19 compute-0 ceph-mon[74318]: OSD bench result of 2607.880277 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 23:25:19 compute-0 ceph-mon[74318]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 21 23:25:19 compute-0 ceph-mon[74318]: osd.1 [v2:192.168.122.100:6802/3586179740,v1:192.168.122.100:6803/3586179740] boot
Jan 21 23:25:19 compute-0 ceph-mon[74318]: osdmap e13: 2 total, 2 up, 2 in
Jan 21 23:25:19 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 21 23:25:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Jan 21 23:25:19 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Jan 21 23:25:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 creating+peering; 0 B data, 852 MiB used, 13 GiB / 14 GiB avail
Jan 21 23:25:19 compute-0 ceph-mgr[74614]: [devicehealth INFO root] creating main.db for devicehealth
Jan 21 23:25:19 compute-0 ceph-mgr[74614]: [devicehealth INFO root] Check health
Jan 21 23:25:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 21 23:25:19 compute-0 sudo[87148]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Jan 21 23:25:19 compute-0 sudo[87148]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 21 23:25:19 compute-0 sudo[87148]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Jan 21 23:25:19 compute-0 sudo[87148]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 21 23:25:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 21 23:25:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 21 23:25:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 21 23:25:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Jan 21 23:25:20 compute-0 ceph-mon[74318]: osdmap e14: 2 total, 2 up, 2 in
Jan 21 23:25:20 compute-0 ceph-mon[74318]: pgmap v57: 1 pgs: 1 creating+peering; 0 B data, 852 MiB used, 13 GiB / 14 GiB avail
Jan 21 23:25:20 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 21 23:25:20 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 21 23:25:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 21 23:25:20 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Jan 21 23:25:21 compute-0 ceph-mon[74318]: osdmap e15: 2 total, 2 up, 2 in
Jan 21 23:25:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 21 23:25:21 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.boqcsl(active, since 102s)
Jan 21 23:25:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:25:22 compute-0 ceph-mon[74318]: pgmap v59: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 21 23:25:22 compute-0 ceph-mon[74318]: mgrmap e8: compute-0.boqcsl(active, since 102s)
Jan 21 23:25:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 21 23:25:24 compute-0 ceph-mon[74318]: pgmap v60: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 21 23:25:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:25:27 compute-0 ceph-mon[74318]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:28 compute-0 sudo[87174]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkrnjunnhsygtykdltknxfharxhakbrf ; /usr/bin/python3'
Jan 21 23:25:28 compute-0 sudo[87174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:25:28 compute-0 python3[87176]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:25:28 compute-0 ceph-mon[74318]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:28 compute-0 podman[87178]: 2026-01-21 23:25:28.395550336 +0000 UTC m=+0.071172987 container create fd65d337db31e32f86f1e31e05ae790a54f74ba346b43bf3635ae89e8221cf45 (image=quay.io/ceph/ceph:v18, name=recursing_nash, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 23:25:28 compute-0 systemd[1]: Started libpod-conmon-fd65d337db31e32f86f1e31e05ae790a54f74ba346b43bf3635ae89e8221cf45.scope.
Jan 21 23:25:28 compute-0 podman[87178]: 2026-01-21 23:25:28.370666829 +0000 UTC m=+0.046289510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:25:28 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa79361b8f3de8f892c7699b681bc5a2b44c7e564bc48ad31c0b67b2a421a1e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa79361b8f3de8f892c7699b681bc5a2b44c7e564bc48ad31c0b67b2a421a1e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa79361b8f3de8f892c7699b681bc5a2b44c7e564bc48ad31c0b67b2a421a1e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:28 compute-0 podman[87178]: 2026-01-21 23:25:28.509068785 +0000 UTC m=+0.184691446 container init fd65d337db31e32f86f1e31e05ae790a54f74ba346b43bf3635ae89e8221cf45 (image=quay.io/ceph/ceph:v18, name=recursing_nash, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 21 23:25:28 compute-0 podman[87178]: 2026-01-21 23:25:28.515451987 +0000 UTC m=+0.191074628 container start fd65d337db31e32f86f1e31e05ae790a54f74ba346b43bf3635ae89e8221cf45 (image=quay.io/ceph/ceph:v18, name=recursing_nash, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:25:28 compute-0 podman[87178]: 2026-01-21 23:25:28.534666263 +0000 UTC m=+0.210288904 container attach fd65d337db31e32f86f1e31e05ae790a54f74ba346b43bf3635ae89e8221cf45 (image=quay.io/ceph/ceph:v18, name=recursing_nash, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 21 23:25:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 21 23:25:29 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3317892759' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 21 23:25:29 compute-0 recursing_nash[87194]: 
Jan 21 23:25:29 compute-0 recursing_nash[87194]: {"fsid":"3759241a-7f1c-520d-ba17-879943ee2f00","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":156,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":15,"num_osds":2,"num_up_osds":2,"osd_up_since":1769037918,"num_in_osds":2,"osd_in_since":1769037895,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":475242496,"bytes_avail":14548754432,"bytes_total":15023996928},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-21T23:24:41.149988+0000","services":{}},"progress_events":{}}
Jan 21 23:25:29 compute-0 systemd[1]: libpod-fd65d337db31e32f86f1e31e05ae790a54f74ba346b43bf3635ae89e8221cf45.scope: Deactivated successfully.
Jan 21 23:25:29 compute-0 conmon[87194]: conmon fd65d337db31e32f86f1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fd65d337db31e32f86f1e31e05ae790a54f74ba346b43bf3635ae89e8221cf45.scope/container/memory.events
Jan 21 23:25:29 compute-0 podman[87178]: 2026-01-21 23:25:29.141175741 +0000 UTC m=+0.816798372 container died fd65d337db31e32f86f1e31e05ae790a54f74ba346b43bf3635ae89e8221cf45 (image=quay.io/ceph/ceph:v18, name=recursing_nash, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 23:25:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-baa79361b8f3de8f892c7699b681bc5a2b44c7e564bc48ad31c0b67b2a421a1e-merged.mount: Deactivated successfully.
Jan 21 23:25:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3317892759' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 21 23:25:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:29 compute-0 podman[87178]: 2026-01-21 23:25:29.736812013 +0000 UTC m=+1.412434674 container remove fd65d337db31e32f86f1e31e05ae790a54f74ba346b43bf3635ae89e8221cf45 (image=quay.io/ceph/ceph:v18, name=recursing_nash, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 21 23:25:29 compute-0 systemd[1]: libpod-conmon-fd65d337db31e32f86f1e31e05ae790a54f74ba346b43bf3635ae89e8221cf45.scope: Deactivated successfully.
Jan 21 23:25:29 compute-0 sudo[87174]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:30 compute-0 sudo[87256]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epmcearturqaxpbpgrgcunwooevlurhr ; /usr/bin/python3'
Jan 21 23:25:30 compute-0 sudo[87256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:25:30 compute-0 python3[87258]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:25:30 compute-0 podman[87259]: 2026-01-21 23:25:30.272482335 +0000 UTC m=+0.061410674 container create 4fa688ff20b61f7889f3c9c775d014154d658788bb88760cb9d5e2f5f4383d5a (image=quay.io/ceph/ceph:v18, name=lucid_gates, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:25:30 compute-0 podman[87259]: 2026-01-21 23:25:30.235286258 +0000 UTC m=+0.024214577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:25:30 compute-0 systemd[1]: Started libpod-conmon-4fa688ff20b61f7889f3c9c775d014154d658788bb88760cb9d5e2f5f4383d5a.scope.
Jan 21 23:25:30 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:25:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73859758f89f4e1b843eacff7db208cf09eb95ff76a2743794310eebdd698cac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73859758f89f4e1b843eacff7db208cf09eb95ff76a2743794310eebdd698cac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:30 compute-0 podman[87259]: 2026-01-21 23:25:30.435046336 +0000 UTC m=+0.223974665 container init 4fa688ff20b61f7889f3c9c775d014154d658788bb88760cb9d5e2f5f4383d5a (image=quay.io/ceph/ceph:v18, name=lucid_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:25:30 compute-0 podman[87259]: 2026-01-21 23:25:30.441775027 +0000 UTC m=+0.230703326 container start 4fa688ff20b61f7889f3c9c775d014154d658788bb88760cb9d5e2f5f4383d5a (image=quay.io/ceph/ceph:v18, name=lucid_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 21 23:25:30 compute-0 podman[87259]: 2026-01-21 23:25:30.529402398 +0000 UTC m=+0.318330747 container attach 4fa688ff20b61f7889f3c9c775d014154d658788bb88760cb9d5e2f5f4383d5a (image=quay.io/ceph/ceph:v18, name=lucid_gates, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 23:25:30 compute-0 ceph-mon[74318]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 21 23:25:30 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2245852911' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 23:25:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:25:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:25:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:25:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:25:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 21 23:25:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 23:25:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:25:31 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:25:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:25:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:25:31 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 21 23:25:31 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 21 23:25:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 21 23:25:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2245852911' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 23:25:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 23:25:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:25:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:25:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2245852911' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 23:25:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Jan 21 23:25:31 compute-0 lucid_gates[87275]: pool 'vms' created
Jan 21 23:25:31 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Jan 21 23:25:31 compute-0 systemd[1]: libpod-4fa688ff20b61f7889f3c9c775d014154d658788bb88760cb9d5e2f5f4383d5a.scope: Deactivated successfully.
Jan 21 23:25:31 compute-0 podman[87259]: 2026-01-21 23:25:31.775724975 +0000 UTC m=+1.564653334 container died 4fa688ff20b61f7889f3c9c775d014154d658788bb88760cb9d5e2f5f4383d5a (image=quay.io/ceph/ceph:v18, name=lucid_gates, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:25:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-73859758f89f4e1b843eacff7db208cf09eb95ff76a2743794310eebdd698cac-merged.mount: Deactivated successfully.
Jan 21 23:25:32 compute-0 podman[87259]: 2026-01-21 23:25:32.102054271 +0000 UTC m=+1.890982570 container remove 4fa688ff20b61f7889f3c9c775d014154d658788bb88760cb9d5e2f5f4383d5a (image=quay.io/ceph/ceph:v18, name=lucid_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 23:25:32 compute-0 sudo[87256]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:25:32 compute-0 systemd[1]: libpod-conmon-4fa688ff20b61f7889f3c9c775d014154d658788bb88760cb9d5e2f5f4383d5a.scope: Deactivated successfully.
Jan 21 23:25:32 compute-0 sudo[87337]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcenhfbofpzdzesakhriensdsxzxxcqm ; /usr/bin/python3'
Jan 21 23:25:32 compute-0 sudo[87337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:25:32 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 16 pg[2.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:32 compute-0 python3[87339]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:25:32 compute-0 podman[87340]: 2026-01-21 23:25:32.597856206 +0000 UTC m=+0.094572090 container create e5c9d511340e9f682e5d1515ac26abc2f8b3f8a4c38c1242ebe50401b225df55 (image=quay.io/ceph/ceph:v18, name=friendly_chebyshev, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 21 23:25:32 compute-0 podman[87340]: 2026-01-21 23:25:32.544606757 +0000 UTC m=+0.041322661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:25:32 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:25:32 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:25:32 compute-0 systemd[1]: Started libpod-conmon-e5c9d511340e9f682e5d1515ac26abc2f8b3f8a4c38c1242ebe50401b225df55.scope.
Jan 21 23:25:32 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490a6960bcc97d793126edd6669538e6dc073ee4b3d718169ebee319630c2289/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490a6960bcc97d793126edd6669538e6dc073ee4b3d718169ebee319630c2289/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 21 23:25:32 compute-0 podman[87340]: 2026-01-21 23:25:32.93908972 +0000 UTC m=+0.435805604 container init e5c9d511340e9f682e5d1515ac26abc2f8b3f8a4c38c1242ebe50401b225df55 (image=quay.io/ceph/ceph:v18, name=friendly_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 23:25:32 compute-0 podman[87340]: 2026-01-21 23:25:32.944934166 +0000 UTC m=+0.441650050 container start e5c9d511340e9f682e5d1515ac26abc2f8b3f8a4c38c1242ebe50401b225df55 (image=quay.io/ceph/ceph:v18, name=friendly_chebyshev, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:25:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Jan 21 23:25:33 compute-0 podman[87340]: 2026-01-21 23:25:33.187915721 +0000 UTC m=+0.684631605 container attach e5c9d511340e9f682e5d1515ac26abc2f8b3f8a4c38c1242ebe50401b225df55 (image=quay.io/ceph/ceph:v18, name=friendly_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 23:25:33 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Jan 21 23:25:33 compute-0 ceph-mon[74318]: Updating compute-2:/etc/ceph/ceph.conf
Jan 21 23:25:33 compute-0 ceph-mon[74318]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:33 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2245852911' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 23:25:33 compute-0 ceph-mon[74318]: osdmap e16: 2 total, 2 up, 2 in
Jan 21 23:25:33 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 17 pg[2.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 21 23:25:33 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2476619428' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 23:25:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v67: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:33 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 21 23:25:33 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 21 23:25:34 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 21 23:25:34 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 23:25:34 compute-0 ceph-mon[74318]: Updating compute-2:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:25:34 compute-0 ceph-mon[74318]: osdmap e17: 2 total, 2 up, 2 in
Jan 21 23:25:34 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2476619428' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 23:25:34 compute-0 ceph-mon[74318]: pgmap v67: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:34 compute-0 ceph-mon[74318]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 21 23:25:34 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2476619428' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 23:25:34 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Jan 21 23:25:34 compute-0 friendly_chebyshev[87355]: pool 'volumes' created
Jan 21 23:25:34 compute-0 systemd[1]: libpod-e5c9d511340e9f682e5d1515ac26abc2f8b3f8a4c38c1242ebe50401b225df55.scope: Deactivated successfully.
Jan 21 23:25:34 compute-0 podman[87340]: 2026-01-21 23:25:34.798258386 +0000 UTC m=+2.294974330 container died e5c9d511340e9f682e5d1515ac26abc2f8b3f8a4c38c1242ebe50401b225df55 (image=quay.io/ceph/ceph:v18, name=friendly_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:25:34 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Jan 21 23:25:34 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.client.admin.keyring
Jan 21 23:25:34 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.client.admin.keyring
Jan 21 23:25:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-490a6960bcc97d793126edd6669538e6dc073ee4b3d718169ebee319630c2289-merged.mount: Deactivated successfully.
Jan 21 23:25:35 compute-0 podman[87340]: 2026-01-21 23:25:35.336384312 +0000 UTC m=+2.833100196 container remove e5c9d511340e9f682e5d1515ac26abc2f8b3f8a4c38c1242ebe50401b225df55 (image=quay.io/ceph/ceph:v18, name=friendly_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 23:25:35 compute-0 sudo[87337]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:35 compute-0 systemd[1]: libpod-conmon-e5c9d511340e9f682e5d1515ac26abc2f8b3f8a4c38c1242ebe50401b225df55.scope: Deactivated successfully.
Jan 21 23:25:35 compute-0 sudo[87418]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoderiridastuiponiythpjibssyneei ; /usr/bin/python3'
Jan 21 23:25:35 compute-0 sudo[87418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:25:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v69: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:35 compute-0 python3[87420]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:25:35 compute-0 ceph-mon[74318]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 23:25:35 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2476619428' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 23:25:35 compute-0 ceph-mon[74318]: osdmap e18: 2 total, 2 up, 2 in
Jan 21 23:25:35 compute-0 ceph-mon[74318]: Updating compute-2:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.client.admin.keyring
Jan 21 23:25:35 compute-0 podman[87421]: 2026-01-21 23:25:35.717413311 +0000 UTC m=+0.087577610 container create 5d4edd69b2610bbb5a5d2931dd875835f36bf2fafb9987938ec694a013d5ad4d (image=quay.io/ceph/ceph:v18, name=kind_thompson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Jan 21 23:25:35 compute-0 podman[87421]: 2026-01-21 23:25:35.653979977 +0000 UTC m=+0.024144306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:25:35 compute-0 systemd[1]: Started libpod-conmon-5d4edd69b2610bbb5a5d2931dd875835f36bf2fafb9987938ec694a013d5ad4d.scope.
Jan 21 23:25:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 21 23:25:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Jan 21 23:25:35 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f83371352f53fcbd72cc379a6b5e5cc3130e55f082b55c062c2b52a8c546585/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f83371352f53fcbd72cc379a6b5e5cc3130e55f082b55c062c2b52a8c546585/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:35 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Jan 21 23:25:35 compute-0 podman[87421]: 2026-01-21 23:25:35.901767235 +0000 UTC m=+0.271931544 container init 5d4edd69b2610bbb5a5d2931dd875835f36bf2fafb9987938ec694a013d5ad4d (image=quay.io/ceph/ceph:v18, name=kind_thompson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 21 23:25:35 compute-0 podman[87421]: 2026-01-21 23:25:35.907821907 +0000 UTC m=+0.277986196 container start 5d4edd69b2610bbb5a5d2931dd875835f36bf2fafb9987938ec694a013d5ad4d (image=quay.io/ceph/ceph:v18, name=kind_thompson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:25:35 compute-0 podman[87421]: 2026-01-21 23:25:35.916738305 +0000 UTC m=+0.286902614 container attach 5d4edd69b2610bbb5a5d2931dd875835f36bf2fafb9987938ec694a013d5ad4d (image=quay.io/ceph/ceph:v18, name=kind_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 21 23:25:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:25:36 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:25:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 21 23:25:36 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3256402681' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 23:25:36 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:25:36 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v71: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:36 compute-0 ceph-mgr[74614]: [progress INFO root] update: starting ev 8d92c0a3-bd28-4012-baa1-fa2ed99d41e8 (Updating mon deployment (+2 -> 3))
Jan 21 23:25:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 21 23:25:36 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 23:25:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 21 23:25:36 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 21 23:25:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:25:36 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:25:36 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Jan 21 23:25:36 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Jan 21 23:25:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 21 23:25:36 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3256402681' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 23:25:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Jan 21 23:25:36 compute-0 kind_thompson[87437]: pool 'backups' created
Jan 21 23:25:36 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Jan 21 23:25:36 compute-0 systemd[1]: libpod-5d4edd69b2610bbb5a5d2931dd875835f36bf2fafb9987938ec694a013d5ad4d.scope: Deactivated successfully.
Jan 21 23:25:36 compute-0 podman[87421]: 2026-01-21 23:25:36.84027736 +0000 UTC m=+1.210441689 container died 5d4edd69b2610bbb5a5d2931dd875835f36bf2fafb9987938ec694a013d5ad4d (image=quay.io/ceph/ceph:v18, name=kind_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:25:36 compute-0 ceph-mon[74318]: pgmap v69: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:36 compute-0 ceph-mon[74318]: osdmap e19: 2 total, 2 up, 2 in
Jan 21 23:25:36 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:36 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3256402681' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 23:25:36 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:36 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:36 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 23:25:36 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 21 23:25:36 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:25:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f83371352f53fcbd72cc379a6b5e5cc3130e55f082b55c062c2b52a8c546585-merged.mount: Deactivated successfully.
Jan 21 23:25:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:25:37 compute-0 podman[87421]: 2026-01-21 23:25:37.230882826 +0000 UTC m=+1.601047125 container remove 5d4edd69b2610bbb5a5d2931dd875835f36bf2fafb9987938ec694a013d5ad4d (image=quay.io/ceph/ceph:v18, name=kind_thompson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:25:37 compute-0 systemd[1]: libpod-conmon-5d4edd69b2610bbb5a5d2931dd875835f36bf2fafb9987938ec694a013d5ad4d.scope: Deactivated successfully.
Jan 21 23:25:37 compute-0 sudo[87418]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:37 compute-0 sudo[87498]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vashaudodohyedgmbmuudfwrrtvlzume ; /usr/bin/python3'
Jan 21 23:25:37 compute-0 sudo[87498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:25:37 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 21 23:25:37 compute-0 python3[87500]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:25:37 compute-0 podman[87501]: 2026-01-21 23:25:37.646348059 +0000 UTC m=+0.087357184 container create ac4035639055631b370baf32d33637420f632a9a97495bae593df33951b9874e (image=quay.io/ceph/ceph:v18, name=zealous_mendel, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 23:25:37 compute-0 podman[87501]: 2026-01-21 23:25:37.579332427 +0000 UTC m=+0.020341572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:25:37 compute-0 systemd[1]: Started libpod-conmon-ac4035639055631b370baf32d33637420f632a9a97495bae593df33951b9874e.scope.
Jan 21 23:25:37 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/553a200bb8522d08618824101c8aac6e15477c283364e076b62e3450b72809ae/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/553a200bb8522d08618824101c8aac6e15477c283364e076b62e3450b72809ae/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 21 23:25:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Jan 21 23:25:37 compute-0 podman[87501]: 2026-01-21 23:25:37.869714104 +0000 UTC m=+0.310723299 container init ac4035639055631b370baf32d33637420f632a9a97495bae593df33951b9874e (image=quay.io/ceph/ceph:v18, name=zealous_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:25:37 compute-0 podman[87501]: 2026-01-21 23:25:37.876897701 +0000 UTC m=+0.317906816 container start ac4035639055631b370baf32d33637420f632a9a97495bae593df33951b9874e (image=quay.io/ceph/ceph:v18, name=zealous_mendel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:25:37 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Jan 21 23:25:37 compute-0 podman[87501]: 2026-01-21 23:25:37.904636063 +0000 UTC m=+0.345645208 container attach ac4035639055631b370baf32d33637420f632a9a97495bae593df33951b9874e (image=quay.io/ceph/ceph:v18, name=zealous_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 21 23:25:37 compute-0 ceph-mon[74318]: pgmap v71: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:37 compute-0 ceph-mon[74318]: Deploying daemon mon.compute-2 on compute-2
Jan 21 23:25:37 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3256402681' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 23:25:37 compute-0 ceph-mon[74318]: osdmap e20: 2 total, 2 up, 2 in
Jan 21 23:25:37 compute-0 ceph-mon[74318]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 21 23:25:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 21 23:25:38 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1322334641' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 23:25:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v74: 4 pgs: 2 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:25:39
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Some PGs (0.500000) are unknown; try again later
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 23:25:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Jan 21 23:25:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:25:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:25:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1322334641' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 23:25:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Jan 21 23:25:39 compute-0 zealous_mendel[87517]: pool 'images' created
Jan 21 23:25:39 compute-0 systemd[1]: libpod-ac4035639055631b370baf32d33637420f632a9a97495bae593df33951b9874e.scope: Deactivated successfully.
Jan 21 23:25:39 compute-0 podman[87501]: 2026-01-21 23:25:39.33018067 +0000 UTC m=+1.771189825 container died ac4035639055631b370baf32d33637420f632a9a97495bae593df33951b9874e (image=quay.io/ceph/ceph:v18, name=zealous_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:25:39 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Jan 21 23:25:39 compute-0 ceph-mon[74318]: osdmap e21: 2 total, 2 up, 2 in
Jan 21 23:25:39 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1322334641' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 23:25:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-553a200bb8522d08618824101c8aac6e15477c283364e076b62e3450b72809ae-merged.mount: Deactivated successfully.
Jan 21 23:25:39 compute-0 podman[87501]: 2026-01-21 23:25:39.616936639 +0000 UTC m=+2.057945764 container remove ac4035639055631b370baf32d33637420f632a9a97495bae593df33951b9874e (image=quay.io/ceph/ceph:v18, name=zealous_mendel, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:25:39 compute-0 sudo[87498]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:39 compute-0 systemd[1]: libpod-conmon-ac4035639055631b370baf32d33637420f632a9a97495bae593df33951b9874e.scope: Deactivated successfully.
Jan 21 23:25:39 compute-0 sudo[87579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqeuljcxfwhfwtaedpyfcquljwxyfvux ; /usr/bin/python3'
Jan 21 23:25:39 compute-0 sudo[87579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:25:39 compute-0 python3[87581]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:25:40 compute-0 podman[87582]: 2026-01-21 23:25:39.927330757 +0000 UTC m=+0.028034103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:25:40 compute-0 podman[87582]: 2026-01-21 23:25:40.108323521 +0000 UTC m=+0.209026797 container create d2e6c474cd2f976b7e9d8e70bf73b74d2d97de06c77274bff82ee5a89b0c7908 (image=quay.io/ceph/ceph:v18, name=amazing_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 21 23:25:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 21 23:25:40 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 21 23:25:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Jan 21 23:25:40 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Jan 21 23:25:40 compute-0 ceph-mgr[74614]: [progress INFO root] update: starting ev 3a46b99a-5210-4346-8619-285efe5dbd77 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 21 23:25:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Jan 21 23:25:40 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:25:40 compute-0 systemd[1]: Started libpod-conmon-d2e6c474cd2f976b7e9d8e70bf73b74d2d97de06c77274bff82ee5a89b0c7908.scope.
Jan 21 23:25:40 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2a276a601bd0c4608c7f0ad8f381bfafe809431c11ec24ed166f16db70f54a7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2a276a601bd0c4608c7f0ad8f381bfafe809431c11ec24ed166f16db70f54a7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:40 compute-0 podman[87582]: 2026-01-21 23:25:40.41474308 +0000 UTC m=+0.515446356 container init d2e6c474cd2f976b7e9d8e70bf73b74d2d97de06c77274bff82ee5a89b0c7908 (image=quay.io/ceph/ceph:v18, name=amazing_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:25:40 compute-0 podman[87582]: 2026-01-21 23:25:40.419621846 +0000 UTC m=+0.520325122 container start d2e6c474cd2f976b7e9d8e70bf73b74d2d97de06c77274bff82ee5a89b0c7908 (image=quay.io/ceph/ceph:v18, name=amazing_kare, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:25:40 compute-0 podman[87582]: 2026-01-21 23:25:40.422548994 +0000 UTC m=+0.523252270 container attach d2e6c474cd2f976b7e9d8e70bf73b74d2d97de06c77274bff82ee5a89b0c7908 (image=quay.io/ceph/ceph:v18, name=amazing_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:25:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v77: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 21 23:25:40 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:40 compute-0 ceph-mon[74318]: pgmap v74: 4 pgs: 2 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:25:40 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1322334641' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 23:25:40 compute-0 ceph-mon[74318]: osdmap e22: 2 total, 2 up, 2 in
Jan 21 23:25:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 21 23:25:40 compute-0 ceph-mon[74318]: osdmap e23: 2 total, 2 up, 2 in
Jan 21 23:25:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:25:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 21 23:25:40 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2970035950' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 23:25:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 21 23:25:41 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 23:25:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 21 23:25:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:25:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2970035950' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 23:25:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Jan 21 23:25:41 compute-0 amazing_kare[87597]: pool 'cephfs.cephfs.meta' created
Jan 21 23:25:41 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Jan 21 23:25:41 compute-0 ceph-mgr[74614]: [progress INFO root] update: starting ev fc2ff299-5cc2-4708-b159-33ed85f28318 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 21 23:25:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Jan 21 23:25:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:25:41 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 24 pg[2.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=24 pruub=15.935340881s) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active pruub 48.267284393s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:25:41 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 24 pg[2.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=24 pruub=15.935340881s) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown pruub 48.267284393s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:41 compute-0 systemd[1]: libpod-d2e6c474cd2f976b7e9d8e70bf73b74d2d97de06c77274bff82ee5a89b0c7908.scope: Deactivated successfully.
Jan 21 23:25:41 compute-0 podman[87582]: 2026-01-21 23:25:41.390657689 +0000 UTC m=+1.491360995 container died d2e6c474cd2f976b7e9d8e70bf73b74d2d97de06c77274bff82ee5a89b0c7908 (image=quay.io/ceph/ceph:v18, name=amazing_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 23:25:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2a276a601bd0c4608c7f0ad8f381bfafe809431c11ec24ed166f16db70f54a7-merged.mount: Deactivated successfully.
Jan 21 23:25:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:25:41 compute-0 podman[87582]: 2026-01-21 23:25:41.524055763 +0000 UTC m=+1.624759029 container remove d2e6c474cd2f976b7e9d8e70bf73b74d2d97de06c77274bff82ee5a89b0c7908 (image=quay.io/ceph/ceph:v18, name=amazing_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 21 23:25:41 compute-0 systemd[1]: libpod-conmon-d2e6c474cd2f976b7e9d8e70bf73b74d2d97de06c77274bff82ee5a89b0c7908.scope: Deactivated successfully.
Jan 21 23:25:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:25:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 21 23:25:41 compute-0 sudo[87579]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 21 23:25:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 21 23:25:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 23:25:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 21 23:25:41 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 21 23:25:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:25:41 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:25:41 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Jan 21 23:25:41 compute-0 ceph-mon[74318]: pgmap v77: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:41 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2970035950' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 23:25:41 compute-0 ceph-mon[74318]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 23:25:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 21 23:25:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:25:41 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2970035950' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 23:25:41 compute-0 ceph-mon[74318]: osdmap e24: 2 total, 2 up, 2 in
Jan 21 23:25:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:25:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:41 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Jan 21 23:25:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 21 23:25:41 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1713679334; not ready for session (expect reconnect)
Jan 21 23:25:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Jan 21 23:25:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 21 23:25:41 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 23:25:41 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Jan 21 23:25:41 compute-0 ceph-mon[74318]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Jan 21 23:25:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:25:41 compute-0 ceph-mon[74318]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 21 23:25:41 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 21 23:25:41 compute-0 ceph-mon[74318]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 21 23:25:41 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 23:25:41 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 21 23:25:41 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 21 23:25:41 compute-0 ceph-mon[74318]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Jan 21 23:25:41 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 23:25:41 compute-0 sudo[87660]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krpeybyvobjocuxhhyssuwgjggrjjdca ; /usr/bin/python3'
Jan 21 23:25:41 compute-0 sudo[87660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:25:41 compute-0 python3[87662]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:25:41 compute-0 podman[87663]: 2026-01-21 23:25:41.93364247 +0000 UTC m=+0.073112066 container create 388b66bf5fcd792634d50038f8ed938fb31605fc61fe24ed296473e6bc410878 (image=quay.io/ceph/ceph:v18, name=youthful_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:25:41 compute-0 systemd[1]: Started libpod-conmon-388b66bf5fcd792634d50038f8ed938fb31605fc61fe24ed296473e6bc410878.scope.
Jan 21 23:25:41 compute-0 podman[87663]: 2026-01-21 23:25:41.903734791 +0000 UTC m=+0.043204377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:25:42 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2e6721d4a27be53a3621c16fac2995c682d6446481b3d9337560f976e8743e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2e6721d4a27be53a3621c16fac2995c682d6446481b3d9337560f976e8743e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:42 compute-0 podman[87663]: 2026-01-21 23:25:42.037325343 +0000 UTC m=+0.176795009 container init 388b66bf5fcd792634d50038f8ed938fb31605fc61fe24ed296473e6bc410878 (image=quay.io/ceph/ceph:v18, name=youthful_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:25:42 compute-0 podman[87663]: 2026-01-21 23:25:42.050143757 +0000 UTC m=+0.189613363 container start 388b66bf5fcd792634d50038f8ed938fb31605fc61fe24ed296473e6bc410878 (image=quay.io/ceph/ceph:v18, name=youthful_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:25:42 compute-0 podman[87663]: 2026-01-21 23:25:42.054302552 +0000 UTC m=+0.193772168 container attach 388b66bf5fcd792634d50038f8ed938fb31605fc61fe24ed296473e6bc410878 (image=quay.io/ceph/ceph:v18, name=youthful_ardinghelli, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 21 23:25:42 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 21 23:25:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v79: 37 pgs: 32 unknown, 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:42 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 21 23:25:42 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:42 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 21 23:25:42 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1713679334; not ready for session (expect reconnect)
Jan 21 23:25:42 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 21 23:25:42 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 23:25:42 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 21 23:25:42 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 21 23:25:43 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1713679334; not ready for session (expect reconnect)
Jan 21 23:25:43 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 21 23:25:43 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 23:25:43 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 21 23:25:43 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 21 23:25:44 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:25:44 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 21 23:25:44 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1558337394; not ready for session (expect reconnect)
Jan 21 23:25:44 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 21 23:25:44 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:44 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 21 23:25:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v80: 37 pgs: 1 peering, 32 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:44 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 21 23:25:44 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:44 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1713679334; not ready for session (expect reconnect)
Jan 21 23:25:44 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 21 23:25:44 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 23:25:44 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 21 23:25:45 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1558337394; not ready for session (expect reconnect)
Jan 21 23:25:45 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 21 23:25:45 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:45 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 21 23:25:45 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 21 23:25:45 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 21 23:25:45 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1713679334; not ready for session (expect reconnect)
Jan 21 23:25:45 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 21 23:25:45 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 23:25:45 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 21 23:25:45 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 21 23:25:46 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1558337394; not ready for session (expect reconnect)
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 21 23:25:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v81: 37 pgs: 1 creating+peering, 1 peering, 31 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1713679334; not ready for session (expect reconnect)
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 21 23:25:46 compute-0 ceph-mon[74318]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : fsmap 
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.boqcsl(active, since 2m)
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] :     application not enabled on pool 'vms'
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] :     application not enabled on pool 'volumes'
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:46 compute-0 ceph-mgr[74614]: [progress INFO root] complete: finished ev 8d92c0a3-bd28-4012-baa1-fa2ed99d41e8 (Updating mon deployment (+2 -> 3))
Jan 21 23:25:46 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event 8d92c0a3-bd28-4012-baa1-fa2ed99d41e8 (Updating mon deployment (+2 -> 3)) in 10 seconds
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:46 compute-0 ceph-mgr[74614]: [progress INFO root] update: starting ev f4931e02-289f-4789-99b7-8f05c3cc6189 (Updating mgr deployment (+2 -> 3))
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.uvjsro", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.uvjsro", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.uvjsro", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.uvjsro on compute-2
Jan 21 23:25:46 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.uvjsro on compute-2
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Jan 21 23:25:46 compute-0 ceph-mon[74318]: Deploying daemon mon.compute-1 on compute-1
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mon.compute-0 calling monitor election
Jan 21 23:25:46 compute-0 ceph-mon[74318]: pgmap v79: 37 pgs: 32 unknown, 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mon.compute-2 calling monitor election
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mon[74318]: pgmap v80: 37 pgs: 1 peering, 32 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 21 23:25:46 compute-0 ceph-mon[74318]: monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 21 23:25:46 compute-0 ceph-mon[74318]: fsmap 
Jan 21 23:25:46 compute-0 ceph-mon[74318]: osdmap e24: 2 total, 2 up, 2 in
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mgrmap e8: compute-0.boqcsl(active, since 2m)
Jan 21 23:25:46 compute-0 ceph-mon[74318]: Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
Jan 21 23:25:46 compute-0 ceph-mon[74318]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Jan 21 23:25:46 compute-0 ceph-mon[74318]:     application not enabled on pool 'vms'
Jan 21 23:25:46 compute-0 ceph-mon[74318]:     application not enabled on pool 'volumes'
Jan 21 23:25:46 compute-0 ceph-mon[74318]:     application not enabled on pool 'backups'
Jan 21 23:25:46 compute-0 ceph-mon[74318]:     application not enabled on pool 'images'
Jan 21 23:25:46 compute-0 ceph-mon[74318]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.uvjsro", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Jan 21 23:25:46 compute-0 ceph-mgr[74614]: [progress INFO root] update: starting ev 1ef9cdf7-ef41-4546-8b3c-fdc21b05ca0d (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 21 23:25:46 compute-0 ceph-mgr[74614]: [progress INFO root] complete: finished ev 3a46b99a-5210-4346-8619-285efe5dbd77 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 21 23:25:46 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event 3a46b99a-5210-4346-8619-285efe5dbd77 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 6 seconds
Jan 21 23:25:46 compute-0 ceph-mgr[74614]: [progress INFO root] complete: finished ev fc2ff299-5cc2-4708-b159-33ed85f28318 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 21 23:25:46 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event fc2ff299-5cc2-4708-b159-33ed85f28318 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 5 seconds
Jan 21 23:25:46 compute-0 ceph-mgr[74614]: [progress INFO root] complete: finished ev 1ef9cdf7-ef41-4546-8b3c-fdc21b05ca0d (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 21 23:25:46 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event 1ef9cdf7-ef41-4546-8b3c-fdc21b05ca0d (PG autoscaler increasing pool 4 PGs from 1 to 32) in 0 seconds
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.1e( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.1d( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.1f( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.1c( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.9( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.a( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.b( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.8( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.7( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.6( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.4( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.2( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.5( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.1( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.3( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.c( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.d( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.e( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.f( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.10( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.11( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.12( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.14( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.13( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.15( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.16( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.17( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.18( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.19( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.1b( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.1a( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.1f( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.1d( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.1c( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.1e( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.8( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.7( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.a( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.2( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.5( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.4( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.b( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.1( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.3( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.c( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.6( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.9( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.f( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.e( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.0( empty local-lis/les=24/25 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.10( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.11( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.17( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.14( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.d( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.12( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.18( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.16( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.1b( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.19( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.15( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.13( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 25 pg[2.1a( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [1] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 21 23:25:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1870498075' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 23:25:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:25:47 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.1 deep-scrub starts
Jan 21 23:25:47 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.1 deep-scrub ok
Jan 21 23:25:47 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1558337394; not ready for session (expect reconnect)
Jan 21 23:25:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 21 23:25:47 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:47 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 21 23:25:47 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1713679334; not ready for session (expect reconnect)
Jan 21 23:25:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 21 23:25:47 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 23:25:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 21 23:25:47 compute-0 ceph-mon[74318]: pgmap v81: 37 pgs: 1 creating+peering, 1 peering, 31 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:47 compute-0 ceph-mon[74318]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 23:25:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.uvjsro", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 21 23:25:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 21 23:25:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:25:47 compute-0 ceph-mon[74318]: Deploying daemon mgr.compute-2.uvjsro on compute-2
Jan 21 23:25:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 21 23:25:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:25:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:25:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:25:47 compute-0 ceph-mon[74318]: osdmap e25: 2 total, 2 up, 2 in
Jan 21 23:25:47 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1870498075' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 21 23:25:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 23:25:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1870498075' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 23:25:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Jan 21 23:25:47 compute-0 youthful_ardinghelli[87679]: pool 'cephfs.cephfs.data' created
Jan 21 23:25:47 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Jan 21 23:25:47 compute-0 systemd[1]: libpod-388b66bf5fcd792634d50038f8ed938fb31605fc61fe24ed296473e6bc410878.scope: Deactivated successfully.
Jan 21 23:25:47 compute-0 podman[87663]: 2026-01-21 23:25:47.876822502 +0000 UTC m=+6.016292108 container died 388b66bf5fcd792634d50038f8ed938fb31605fc61fe24ed296473e6bc410878 (image=quay.io/ceph/ceph:v18, name=youthful_ardinghelli, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:25:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b2e6721d4a27be53a3621c16fac2995c682d6446481b3d9337560f976e8743e-merged.mount: Deactivated successfully.
Jan 21 23:25:48 compute-0 systemd[75939]: Starting Mark boot as successful...
Jan 21 23:25:48 compute-0 systemd[75939]: Finished Mark boot as successful.
Jan 21 23:25:48 compute-0 podman[87663]: 2026-01-21 23:25:48.023956019 +0000 UTC m=+6.163425625 container remove 388b66bf5fcd792634d50038f8ed938fb31605fc61fe24ed296473e6bc410878 (image=quay.io/ceph/ceph:v18, name=youthful_ardinghelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:25:48 compute-0 systemd[1]: libpod-conmon-388b66bf5fcd792634d50038f8ed938fb31605fc61fe24ed296473e6bc410878.scope: Deactivated successfully.
Jan 21 23:25:48 compute-0 sudo[87660]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:48 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.2 deep-scrub starts
Jan 21 23:25:48 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 26 pg[7.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [1] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:25:48 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.2 deep-scrub ok
Jan 21 23:25:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 21 23:25:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Jan 21 23:25:48 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1558337394; not ready for session (expect reconnect)
Jan 21 23:25:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 21 23:25:48 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:48 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 21 23:25:48 compute-0 ceph-mon[74318]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 21 23:25:48 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 21 23:25:48 compute-0 ceph-mon[74318]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 21 23:25:48 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:48 compute-0 ceph-mon[74318]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 21 23:25:48 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 23:25:48 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 21 23:25:48 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 21 23:25:48 compute-0 sudo[87744]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixviwlripdhsatdgwhpkdrbqxtmejxxs ; /usr/bin/python3'
Jan 21 23:25:48 compute-0 ceph-mon[74318]: paxos.0).electionLogic(10) init, last seen epoch 10
Jan 21 23:25:48 compute-0 sudo[87744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:25:48 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 23:25:48 compute-0 python3[87746]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:25:48 compute-0 podman[87747]: 2026-01-21 23:25:48.452346289 +0000 UTC m=+0.046888928 container create 2699c1eada16428f71f4c5e4b9ae3d98341c593e654b761291ec196f91aadf5f (image=quay.io/ceph/ceph:v18, name=pedantic_clarke, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:25:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v84: 69 pgs: 1 creating+peering, 1 peering, 63 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:48 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 21 23:25:48 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:48 compute-0 systemd[1]: Started libpod-conmon-2699c1eada16428f71f4c5e4b9ae3d98341c593e654b761291ec196f91aadf5f.scope.
Jan 21 23:25:48 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 21 23:25:48 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ef549e4f0dd77649acb7f249f768efb6f7689d5dff4e705e7c462bf0d8fd6d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ef549e4f0dd77649acb7f249f768efb6f7689d5dff4e705e7c462bf0d8fd6d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:48 compute-0 podman[87747]: 2026-01-21 23:25:48.434191415 +0000 UTC m=+0.028734064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:25:48 compute-0 podman[87747]: 2026-01-21 23:25:48.540419884 +0000 UTC m=+0.134962543 container init 2699c1eada16428f71f4c5e4b9ae3d98341c593e654b761291ec196f91aadf5f (image=quay.io/ceph/ceph:v18, name=pedantic_clarke, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:25:48 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:25:48 compute-0 podman[87747]: 2026-01-21 23:25:48.550551388 +0000 UTC m=+0.145094057 container start 2699c1eada16428f71f4c5e4b9ae3d98341c593e654b761291ec196f91aadf5f (image=quay.io/ceph/ceph:v18, name=pedantic_clarke, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 21 23:25:48 compute-0 podman[87747]: 2026-01-21 23:25:48.554514567 +0000 UTC m=+0.149057216 container attach 2699c1eada16428f71f4c5e4b9ae3d98341c593e654b761291ec196f91aadf5f (image=quay.io/ceph/ceph:v18, name=pedantic_clarke, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 23:25:48 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:25:48.615+0000 7fbf53a93640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Jan 21 23:25:48 compute-0 ceph-mgr[74614]: mgr.server handle_report got status from non-daemon mon.compute-2
Jan 21 23:25:48 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 21 23:25:48 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 21 23:25:49 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 21 23:25:49 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 21 23:25:49 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1558337394; not ready for session (expect reconnect)
Jan 21 23:25:49 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 21 23:25:49 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:49 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 21 23:25:49 compute-0 ceph-mgr[74614]: [progress INFO root] Writing back 6 completed events
Jan 21 23:25:49 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 21 23:25:49 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 21 23:25:49 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 21 23:25:49 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 21 23:25:49 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 21 23:25:50 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 21 23:25:50 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 21 23:25:50 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1558337394; not ready for session (expect reconnect)
Jan 21 23:25:50 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 21 23:25:50 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:50 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 21 23:25:50 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 21 23:25:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v85: 69 pgs: 1 creating+peering, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:50 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 21 23:25:50 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:51 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 21 23:25:51 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 21 23:25:51 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1558337394; not ready for session (expect reconnect)
Jan 21 23:25:51 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 21 23:25:51 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:51 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 21 23:25:51 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 21 23:25:51 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 21 23:25:51 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 21 23:25:52 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 21 23:25:52 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 21 23:25:52 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 21 23:25:52 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 21 23:25:52 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1558337394; not ready for session (expect reconnect)
Jan 21 23:25:52 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 21 23:25:52 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:52 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 21 23:25:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v86: 69 pgs: 1 creating+peering, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:52 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 21 23:25:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:52 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 21 23:25:52 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 21 23:25:53 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1558337394; not ready for session (expect reconnect)
Jan 21 23:25:53 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:53 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 21 23:25:53 compute-0 ceph-mon[74318]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Jan 21 23:25:53 compute-0 ceph-mon[74318]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 21 23:25:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : fsmap 
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.boqcsl(active, since 2m)
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 5 pool(s) do not have an application enabled
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 5 pool(s) do not have an application enabled
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] :     application not enabled on pool 'vms'
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] :     application not enabled on pool 'volumes'
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 23:25:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:25:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Jan 21 23:25:53 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 27 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [1] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:25:53 compute-0 ceph-mon[74318]: 2.2 deep-scrub starts
Jan 21 23:25:53 compute-0 ceph-mon[74318]: 2.2 deep-scrub ok
Jan 21 23:25:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 21 23:25:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 21 23:25:53 compute-0 ceph-mon[74318]: mon.compute-0 calling monitor election
Jan 21 23:25:53 compute-0 ceph-mon[74318]: mon.compute-2 calling monitor election
Jan 21 23:25:53 compute-0 ceph-mon[74318]: pgmap v84: 69 pgs: 1 creating+peering, 1 peering, 63 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:53 compute-0 ceph-mon[74318]: 2.3 scrub starts
Jan 21 23:25:53 compute-0 ceph-mon[74318]: 2.3 scrub ok
Jan 21 23:25:53 compute-0 ceph-mon[74318]: 2.4 scrub starts
Jan 21 23:25:53 compute-0 ceph-mon[74318]: 2.4 scrub ok
Jan 21 23:25:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:53 compute-0 ceph-mon[74318]: mon.compute-1 calling monitor election
Jan 21 23:25:53 compute-0 ceph-mon[74318]: pgmap v85: 69 pgs: 1 creating+peering, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:53 compute-0 ceph-mon[74318]: 2.5 scrub starts
Jan 21 23:25:53 compute-0 ceph-mon[74318]: 2.5 scrub ok
Jan 21 23:25:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:53 compute-0 ceph-mon[74318]: 2.6 scrub starts
Jan 21 23:25:53 compute-0 ceph-mon[74318]: 2.6 scrub ok
Jan 21 23:25:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:53 compute-0 ceph-mon[74318]: pgmap v86: 69 pgs: 1 creating+peering, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:53 compute-0 ceph-mon[74318]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 21 23:25:53 compute-0 ceph-mon[74318]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 21 23:25:53 compute-0 ceph-mon[74318]: fsmap 
Jan 21 23:25:53 compute-0 ceph-mon[74318]: osdmap e26: 2 total, 2 up, 2 in
Jan 21 23:25:53 compute-0 ceph-mon[74318]: mgrmap e8: compute-0.boqcsl(active, since 2m)
Jan 21 23:25:53 compute-0 ceph-mon[74318]: Health detail: HEALTH_WARN 5 pool(s) do not have an application enabled
Jan 21 23:25:53 compute-0 ceph-mon[74318]: [WRN] POOL_APP_NOT_ENABLED: 5 pool(s) do not have an application enabled
Jan 21 23:25:53 compute-0 ceph-mon[74318]:     application not enabled on pool 'vms'
Jan 21 23:25:53 compute-0 ceph-mon[74318]:     application not enabled on pool 'volumes'
Jan 21 23:25:53 compute-0 ceph-mon[74318]:     application not enabled on pool 'backups'
Jan 21 23:25:53 compute-0 ceph-mon[74318]:     application not enabled on pool 'images'
Jan 21 23:25:53 compute-0 ceph-mon[74318]:     application not enabled on pool 'cephfs.cephfs.meta'
Jan 21 23:25:53 compute-0 ceph-mon[74318]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 21 23:25:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.ihmngr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.ihmngr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.ihmngr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 21 23:25:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 21 23:25:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:25:53 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.ihmngr on compute-1
Jan 21 23:25:53 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.ihmngr on compute-1
Jan 21 23:25:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Jan 21 23:25:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2493779774' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 21 23:25:54 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 21 23:25:54 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 21 23:25:54 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1558337394; not ready for session (expect reconnect)
Jan 21 23:25:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 21 23:25:54 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 21 23:25:54 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2493779774' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 21 23:25:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e28 e28: 2 total, 2 up, 2 in
Jan 21 23:25:54 compute-0 pedantic_clarke[87760]: enabled application 'rbd' on pool 'vms'
Jan 21 23:25:54 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Jan 21 23:25:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:54 compute-0 ceph-mon[74318]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 23:25:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:25:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:25:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:25:54 compute-0 ceph-mon[74318]: osdmap e27: 2 total, 2 up, 2 in
Jan 21 23:25:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.ihmngr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 23:25:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.ihmngr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 21 23:25:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 21 23:25:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:25:54 compute-0 ceph-mon[74318]: Deploying daemon mgr.compute-1.ihmngr on compute-1
Jan 21 23:25:54 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2493779774' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 21 23:25:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 21 23:25:54 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2493779774' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 21 23:25:54 compute-0 ceph-mon[74318]: osdmap e28: 2 total, 2 up, 2 in
Jan 21 23:25:54 compute-0 systemd[1]: libpod-2699c1eada16428f71f4c5e4b9ae3d98341c593e654b761291ec196f91aadf5f.scope: Deactivated successfully.
Jan 21 23:25:54 compute-0 podman[87747]: 2026-01-21 23:25:54.368595583 +0000 UTC m=+5.963138222 container died 2699c1eada16428f71f4c5e4b9ae3d98341c593e654b761291ec196f91aadf5f (image=quay.io/ceph/ceph:v18, name=pedantic_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:25:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4ef549e4f0dd77649acb7f249f768efb6f7689d5dff4e705e7c462bf0d8fd6d-merged.mount: Deactivated successfully.
Jan 21 23:25:54 compute-0 podman[87747]: 2026-01-21 23:25:54.420085088 +0000 UTC m=+6.014627737 container remove 2699c1eada16428f71f4c5e4b9ae3d98341c593e654b761291ec196f91aadf5f (image=quay.io/ceph/ceph:v18, name=pedantic_clarke, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 21 23:25:54 compute-0 systemd[1]: libpod-conmon-2699c1eada16428f71f4c5e4b9ae3d98341c593e654b761291ec196f91aadf5f.scope: Deactivated successfully.
Jan 21 23:25:54 compute-0 sudo[87744]: pam_unix(sudo:session): session closed for user root
Jan 21 23:25:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v89: 100 pgs: 31 unknown, 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:54 compute-0 sudo[87822]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxzjzrjvdebcwsgofipzvnkacqvwatew ; /usr/bin/python3'
Jan 21 23:25:54 compute-0 sudo[87822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:25:54 compute-0 python3[87824]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:25:54 compute-0 podman[87825]: 2026-01-21 23:25:54.844926233 +0000 UTC m=+0.065185248 container create 12f84e0f0f03b3805d0271c15c44172f2e38f6faf66c4440fd2318c0f41387ad (image=quay.io/ceph/ceph:v18, name=suspicious_euler, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:25:54 compute-0 systemd[1]: Started libpod-conmon-12f84e0f0f03b3805d0271c15c44172f2e38f6faf66c4440fd2318c0f41387ad.scope.
Jan 21 23:25:54 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:25:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ea7289b8cc5c1909a89ad3a657b67e0767248f763a4961ebad618e61622a5de/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ea7289b8cc5c1909a89ad3a657b67e0767248f763a4961ebad618e61622a5de/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:25:54 compute-0 podman[87825]: 2026-01-21 23:25:54.914357367 +0000 UTC m=+0.134616412 container init 12f84e0f0f03b3805d0271c15c44172f2e38f6faf66c4440fd2318c0f41387ad (image=quay.io/ceph/ceph:v18, name=suspicious_euler, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 21 23:25:54 compute-0 podman[87825]: 2026-01-21 23:25:54.822990935 +0000 UTC m=+0.043249970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:25:54 compute-0 podman[87825]: 2026-01-21 23:25:54.920869293 +0000 UTC m=+0.141128308 container start 12f84e0f0f03b3805d0271c15c44172f2e38f6faf66c4440fd2318c0f41387ad (image=quay.io/ceph/ceph:v18, name=suspicious_euler, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:25:54 compute-0 podman[87825]: 2026-01-21 23:25:54.924080779 +0000 UTC m=+0.144339824 container attach 12f84e0f0f03b3805d0271c15c44172f2e38f6faf66c4440fd2318c0f41387ad (image=quay.io/ceph/ceph:v18, name=suspicious_euler, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 23:25:55 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-21T23:25:55.222+0000 7fbf53a93640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Jan 21 23:25:55 compute-0 ceph-mgr[74614]: mgr.server handle_report got status from non-daemon mon.compute-1
Jan 21 23:25:55 compute-0 ceph-mon[74318]: 2.7 scrub starts
Jan 21 23:25:55 compute-0 ceph-mon[74318]: 2.7 scrub ok
Jan 21 23:25:55 compute-0 ceph-mon[74318]: 3.1 scrub starts
Jan 21 23:25:55 compute-0 ceph-mon[74318]: 3.1 scrub ok
Jan 21 23:25:55 compute-0 ceph-mon[74318]: pgmap v89: 100 pgs: 31 unknown, 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Jan 21 23:25:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3150021355' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 21 23:25:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:25:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:25:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 21 23:25:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:55 compute-0 ceph-mgr[74614]: [progress INFO root] complete: finished ev f4931e02-289f-4789-99b7-8f05c3cc6189 (Updating mgr deployment (+2 -> 3))
Jan 21 23:25:55 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event f4931e02-289f-4789-99b7-8f05c3cc6189 (Updating mgr deployment (+2 -> 3)) in 9 seconds
Jan 21 23:25:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 21 23:25:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:55 compute-0 ceph-mgr[74614]: [progress INFO root] update: starting ev bbd2f02e-e76f-4c0b-b6a9-93230a0b9d8f (Updating crash deployment (+1 -> 3))
Jan 21 23:25:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 21 23:25:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 23:25:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 21 23:25:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:25:55 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:25:55 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Jan 21 23:25:55 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Jan 21 23:25:56 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 21 23:25:56 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 21 23:25:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 21 23:25:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v90: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 21 23:25:56 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 21 23:25:56 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 21 23:25:56 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:56 compute-0 ceph-mon[74318]: 3.2 scrub starts
Jan 21 23:25:56 compute-0 ceph-mon[74318]: 3.2 scrub ok
Jan 21 23:25:56 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3150021355' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 21 23:25:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:25:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 23:25:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 21 23:25:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:25:56 compute-0 ceph-mon[74318]: Deploying daemon crash.compute-2 on compute-2
Jan 21 23:25:58 compute-0 ceph-mgr[74614]: [progress INFO root] Writing back 7 completed events
Jan 21 23:25:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 21 23:25:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v91: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:25:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 21 23:25:58 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 21 23:25:58 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:25:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 21 23:25:58 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3150021355' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 21 23:26:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e29 e29: 2 total, 2 up, 2 in
Jan 21 23:26:00 compute-0 suspicious_euler[87840]: enabled application 'rbd' on pool 'volumes'
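client.admin's 'osd pool application enable' on the volumes pool has now finished; suspicious_euler is the one-shot ceph container echoing the command's confirmation. Tagging every pool this way is what eventually clears the POOL_APP_NOT_ENABLED warning raised a few lines below. A minimal sketch covering the three pools this excerpt names (the warning counts a fourth pool that never appears here, so it is left alone):

    import json, subprocess

    def ceph(*args: str) -> str:
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    # Tag the pools named in this log for RBD use.
    for pool in ("backups", "vms", "volumes"):
        ceph("osd", "pool", "application", "enable", pool, "rbd")

    # POOL_APP_NOT_ENABLED clears once every pool carries an application tag.
    health = json.loads(ceph("health", "detail", "--format", "json"))
    print(health["checks"].get("POOL_APP_NOT_ENABLED", "cleared"))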
Jan 21 23:26:00 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Jan 21 23:26:00 compute-0 ceph-mon[74318]: 2.8 scrub starts
Jan 21 23:26:00 compute-0 ceph-mon[74318]: 2.8 scrub ok
Jan 21 23:26:00 compute-0 ceph-mon[74318]: 3.3 deep-scrub starts
Jan 21 23:26:00 compute-0 ceph-mon[74318]: 3.3 deep-scrub ok
Jan 21 23:26:00 compute-0 ceph-mon[74318]: pgmap v90: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:00 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:00 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:00 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:00 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event baf11cf3-7166-4b9e-b1c7-e0b3adba562c (Global Recovery Event) in 21 seconds
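The mgr progress module has just closed the 'Global Recovery Event' it opened 21 seconds earlier; the 'Writing back 7 completed events' line above is the same module persisting its history through config-key. One way to inspect those events, assuming the module's 'ceph progress json' command is available in this release:

    import json, subprocess

    out = subprocess.run(["ceph", "progress", "json"], check=True,
                         capture_output=True, text=True).stdout
    # Dump whatever the module reports; recent releases expose two lists,
    # in-flight "events" and recently finished "completed" ones.
    print(json.dumps(json.loads(out), indent=2))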
Jan 21 23:26:00 compute-0 systemd[1]: libpod-12f84e0f0f03b3805d0271c15c44172f2e38f6faf66c4440fd2318c0f41387ad.scope: Deactivated successfully.
Jan 21 23:26:00 compute-0 podman[87825]: 2026-01-21 23:26:00.22799821 +0000 UTC m=+5.448257235 container died 12f84e0f0f03b3805d0271c15c44172f2e38f6faf66c4440fd2318c0f41387ad (image=quay.io/ceph/ceph:v18, name=suspicious_euler, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:26:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ea7289b8cc5c1909a89ad3a657b67e0767248f763a4961ebad618e61622a5de-merged.mount: Deactivated successfully.
Jan 21 23:26:00 compute-0 podman[87825]: 2026-01-21 23:26:00.275038242 +0000 UTC m=+5.495297287 container remove 12f84e0f0f03b3805d0271c15c44172f2e38f6faf66c4440fd2318c0f41387ad (image=quay.io/ceph/ceph:v18, name=suspicious_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:26:00 compute-0 systemd[1]: libpod-conmon-12f84e0f0f03b3805d0271c15c44172f2e38f6faf66c4440fd2318c0f41387ad.scope: Deactivated successfully.
Jan 21 23:26:00 compute-0 sudo[87822]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:00 compute-0 sudo[87898]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccxfhjgbqsubxuzjoawxrbsvyzzrbhkl ; /usr/bin/python3'
Jan 21 23:26:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v93: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 21 23:26:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 21 23:26:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 21 23:26:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:00 compute-0 sudo[87898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:00 compute-0 python3[87900]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
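The ansible task above issues the pool change through a disposable quay.io/ceph/ceph:v18 container rather than host binaries. The same invocation, reflowed into a Python subprocess call with the exact flags from the logged command:

    import subprocess

    # Mirrors the ansible-invoked command above: a one-shot ceph container
    # that enables the 'rbd' application on the 'backups' pool.
    subprocess.run([
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
        "--fsid", "3759241a-7f1c-520d-ba17-879943ee2f00",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "osd", "pool", "application", "enable", "backups", "rbd",
    ], check=True)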
Jan 21 23:26:00 compute-0 podman[87901]: 2026-01-21 23:26:00.713167586 +0000 UTC m=+0.069145927 container create 3043d3b3bca9f3987ddd959f4900564445d7368db5379425029395fd4092826c (image=quay.io/ceph/ceph:v18, name=determined_hopper, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 21 23:26:00 compute-0 systemd[1]: Started libpod-conmon-3043d3b3bca9f3987ddd959f4900564445d7368db5379425029395fd4092826c.scope.
Jan 21 23:26:00 compute-0 podman[87901]: 2026-01-21 23:26:00.687669201 +0000 UTC m=+0.043647512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:00 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2303ece07e81718b088c02511f31b074fa96e96b634e8986268f4f95f27636/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2303ece07e81718b088c02511f31b074fa96e96b634e8986268f4f95f27636/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:00 compute-0 podman[87901]: 2026-01-21 23:26:00.802287601 +0000 UTC m=+0.158265932 container init 3043d3b3bca9f3987ddd959f4900564445d7368db5379425029395fd4092826c (image=quay.io/ceph/ceph:v18, name=determined_hopper, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:00 compute-0 podman[87901]: 2026-01-21 23:26:00.808815197 +0000 UTC m=+0.164793428 container start 3043d3b3bca9f3987ddd959f4900564445d7368db5379425029395fd4092826c (image=quay.io/ceph/ceph:v18, name=determined_hopper, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:26:00 compute-0 podman[87901]: 2026-01-21 23:26:00.8116026 +0000 UTC m=+0.167580851 container attach 3043d3b3bca9f3987ddd959f4900564445d7368db5379425029395fd4092826c (image=quay.io/ceph/ceph:v18, name=determined_hopper, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
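The create, init, start, and attach lines trace the normal podman lifecycle of a foreground 'podman run'; the matching 'died' and 'remove' events appear at 23:26:02 once the one-shot command exits. To watch such a lifecycle live (the 'container' filter key is assumed from podman's documented event filters; the name is the one podman assigned above):

    import subprocess

    # Stream lifecycle events (create/init/start/attach/died/remove) for the
    # one-shot container named in the lines above; interrupt to stop.
    subprocess.run(
        ["podman", "events", "--filter", "container=determined_hopper"],
        check=True)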
Jan 21 23:26:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:26:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e30 e30: 2 total, 2 up, 2 in
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.1e( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.514321327s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 61.702873230s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.1e( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.514261246s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.702873230s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.a( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.514174461s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 61.702915192s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.a( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.514077187s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.702915192s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.6( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513977051s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 61.703418732s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.6( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513911247s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.703418732s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.1( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513522148s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 61.703441620s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.1( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513499260s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.703441620s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.1f( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.509738922s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 61.699645996s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.c( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513411522s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 61.703475952s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.1f( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.509590149s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.699645996s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.9( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513747215s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 61.703758240s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.d( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513646126s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 61.703849792s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.c( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513227463s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.703475952s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.e( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513462067s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 61.703777313s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.4( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513110161s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 61.703453064s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.d( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513391495s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.703849792s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.e( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513204575s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.703777313s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.9( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513324738s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.703758240s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.10( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513168335s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 61.703895569s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.10( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513143539s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.703895569s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.13( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513416290s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 61.704330444s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.15( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513248444s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 61.704196930s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.15( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513225555s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.704196930s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.13( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513391495s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.704330444s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.1b( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.513108253s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 61.704265594s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.1b( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.512895584s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.704265594s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.19( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.512675285s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 61.704269409s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.19( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.512649536s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.704269409s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[2.4( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.511721611s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.703453064s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[4.1f( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[3.16( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[3.15( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[3.14( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[4.13( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[3.13( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[4.15( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[3.11( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[4.8( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[3.f( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[3.e( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[4.a( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[3.c( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[3.d( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[3.a( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[4.d( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[4.c( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[3.5( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[4.1( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-mon[74318]: pgmap v91: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:01 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3150021355' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 21 23:26:01 compute-0 ceph-mon[74318]: osdmap e29: 2 total, 2 up, 2 in
Jan 21 23:26:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:01 compute-0 ceph-mon[74318]: pgmap v93: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[4.5( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[3.3( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[3.9( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[4.e( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[3.1a( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[4.1b( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[3.1d( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[4.9( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[4.18( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[4.1a( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[3.1c( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 30 pg[3.10( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
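The two bursts from osd.1 above are the direct effect of osdmap e30: the pool-2 PGs moved from up/acting [1] to [0], so osd.1 starts a new peering interval and parks them as Stray, while the pool-3 and pool-4 PGs it still owns re-enter peering as Primary. To see where a PG is mapped after such a change (PG ids taken from the log; JSON keys as reported by 'ceph pg map'):

    import json, subprocess

    def pg_map(pgid: str) -> dict:
        out = subprocess.run(["ceph", "pg", "map", pgid, "--format", "json"],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    for pgid in ("2.1e", "3.16", "4.1f"):
        m = pg_map(pgid)
        # 'up' and 'acting' are the OSD sets the peering lines above refer to.
        print(pgid, "epoch", m["epoch"], "up", m["up"], "acting", m["acting"])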
Jan 21 23:26:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:01 compute-0 ceph-mgr[74614]: [progress INFO root] complete: finished ev bbd2f02e-e76f-4c0b-b6a9-93230a0b9d8f (Updating crash deployment (+1 -> 3))
Jan 21 23:26:01 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event bbd2f02e-e76f-4c0b-b6a9-93230a0b9d8f (Updating crash deployment (+1 -> 3)) in 6 seconds
Jan 21 23:26:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:26:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:26:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:26:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Jan 21 23:26:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2292589030' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 21 23:26:01 compute-0 sudo[87940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:01 compute-0 sudo[87940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:01 compute-0 sudo[87940]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:01 compute-0 sudo[87966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:26:01 compute-0 sudo[87966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:01 compute-0 sudo[87966]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:01 compute-0 sudo[87991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:01 compute-0 sudo[87991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:01 compute-0 sudo[87991]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:01 compute-0 sudo[88016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:26:01 compute-0 sudo[88016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
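Here cephadm creates the next OSD: it runs its bundled copy under /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/ to drive ceph-volume inside the pinned ceph image, batching the logical volume ceph_vg0/ceph_lv0 (prepared earlier, outside this excerpt) into a BlueStore OSD; --no-systemd is passed because cephadm manages the unit itself. The bare ceph-volume step, i.e. everything after the '--' in the logged command:

    import subprocess

    # The ceph-volume invocation that cephadm wraps above; run as root
    # where the ceph tooling lives (here, inside the ceph container).
    subprocess.run([
        "ceph-volume", "lvm", "batch", "--no-auto",
        "/dev/ceph_vg0/ceph_lv0", "--yes", "--no-systemd",
    ], check=True)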
Jan 21 23:26:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:26:02 compute-0 podman[88081]: 2026-01-21 23:26:02.17159952 +0000 UTC m=+0.066744795 container create 85ed27d7b7e8edf339e4730c02f6217c5cc2ab1ec094333dbf0f16063122e9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 21 23:26:02 compute-0 systemd[1]: Started libpod-conmon-85ed27d7b7e8edf339e4730c02f6217c5cc2ab1ec094333dbf0f16063122e9f7.scope.
Jan 21 23:26:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 21 23:26:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2292589030' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 21 23:26:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e31 e31: 2 total, 2 up, 2 in
Jan 21 23:26:02 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:02 compute-0 determined_hopper[87917]: enabled application 'rbd' on pool 'backups'
Jan 21 23:26:02 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Jan 21 23:26:02 compute-0 podman[88081]: 2026-01-21 23:26:02.146190807 +0000 UTC m=+0.041336122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[3.1a( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[3.16( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[3.14( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[4.13( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[3.15( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[4.15( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[3.13( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[3.10( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[4.9( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[3.11( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[3.e( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[4.8( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[3.f( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 podman[88081]: 2026-01-21 23:26:02.257016044 +0000 UTC m=+0.152161309 container init 85ed27d7b7e8edf339e4730c02f6217c5cc2ab1ec094333dbf0f16063122e9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[4.a( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[3.c( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[3.d( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[4.5( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[3.5( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[3.3( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[3.9( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[4.e( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[4.c( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[4.1( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[3.a( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[4.1a( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[4.d( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[3.1d( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[3.1c( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=30) [1] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[4.1b( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[4.18( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 31 pg[4.1f( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
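With epoch 31 applied, every PG osd.1 re-peered reports AllReplicasActivated and returns to active; the pgmap lines surrounding this burst stay at '100 pgs: 100 active+clean' throughout. A small wait-for-clean poll of the kind deployment tooling uses at this point, assuming the pgmap layout of 'ceph status --format json':

    import json, subprocess, time

    def clean_pgs() -> tuple[int, int]:
        out = subprocess.run(["ceph", "status", "--format", "json"],
                             check=True, capture_output=True, text=True).stdout
        pgmap = json.loads(out)["pgmap"]
        clean = sum(s["count"] for s in pgmap.get("pgs_by_state", [])
                    if s["state_name"] == "active+clean")
        return clean, pgmap["num_pgs"]

    # Poll until every PG is active+clean, as in the pgmap lines here.
    while True:
        clean, total = clean_pgs()
        print(f"{clean}/{total} active+clean")
        if clean == total:
            break
        time.sleep(2)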
Jan 21 23:26:02 compute-0 podman[88081]: 2026-01-21 23:26:02.264358645 +0000 UTC m=+0.159503890 container start 85ed27d7b7e8edf339e4730c02f6217c5cc2ab1ec094333dbf0f16063122e9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 21 23:26:02 compute-0 systemd[1]: libpod-3043d3b3bca9f3987ddd959f4900564445d7368db5379425029395fd4092826c.scope: Deactivated successfully.
Jan 21 23:26:02 compute-0 podman[88081]: 2026-01-21 23:26:02.269697275 +0000 UTC m=+0.164842610 container attach 85ed27d7b7e8edf339e4730c02f6217c5cc2ab1ec094333dbf0f16063122e9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rhodes, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:02 compute-0 trusting_rhodes[88097]: 167 167
Jan 21 23:26:02 compute-0 systemd[1]: libpod-85ed27d7b7e8edf339e4730c02f6217c5cc2ab1ec094333dbf0f16063122e9f7.scope: Deactivated successfully.
Jan 21 23:26:02 compute-0 podman[87901]: 2026-01-21 23:26:02.271113637 +0000 UTC m=+1.627091938 container died 3043d3b3bca9f3987ddd959f4900564445d7368db5379425029395fd4092826c (image=quay.io/ceph/ceph:v18, name=determined_hopper, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:26:02 compute-0 podman[88081]: 2026-01-21 23:26:02.271114897 +0000 UTC m=+0.166260132 container died 85ed27d7b7e8edf339e4730c02f6217c5cc2ab1ec094333dbf0f16063122e9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:26:02 compute-0 ceph-mon[74318]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:26:02 compute-0 ceph-mon[74318]: osdmap e30: 2 total, 2 up, 2 in
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2292589030' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 21 23:26:02 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2292589030' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 21 23:26:02 compute-0 ceph-mon[74318]: osdmap e31: 2 total, 2 up, 2 in
Jan 21 23:26:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-af2303ece07e81718b088c02511f31b074fa96e96b634e8986268f4f95f27636-merged.mount: Deactivated successfully.
Jan 21 23:26:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-64dca94922895242b964cd625c6ba0a131dfd35e47c3c97cdb781c57ece60efc-merged.mount: Deactivated successfully.
Jan 21 23:26:02 compute-0 podman[88081]: 2026-01-21 23:26:02.334308835 +0000 UTC m=+0.229454070 container remove 85ed27d7b7e8edf339e4730c02f6217c5cc2ab1ec094333dbf0f16063122e9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 23:26:02 compute-0 systemd[1]: libpod-conmon-85ed27d7b7e8edf339e4730c02f6217c5cc2ab1ec094333dbf0f16063122e9f7.scope: Deactivated successfully.
Jan 21 23:26:02 compute-0 podman[87901]: 2026-01-21 23:26:02.345785459 +0000 UTC m=+1.701763700 container remove 3043d3b3bca9f3987ddd959f4900564445d7368db5379425029395fd4092826c (image=quay.io/ceph/ceph:v18, name=determined_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:26:02 compute-0 systemd[1]: libpod-conmon-3043d3b3bca9f3987ddd959f4900564445d7368db5379425029395fd4092826c.scope: Deactivated successfully.
Jan 21 23:26:02 compute-0 sudo[87898]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v96: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:02 compute-0 podman[88135]: 2026-01-21 23:26:02.50239356 +0000 UTC m=+0.052328731 container create 8acf77c90af32e00c8b7d72790de3d6f1936c5e3cec34ac669ea876036b0e29d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:26:02 compute-0 sudo[88172]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbvgbabgxdnjgzqkuqepgitefamkhgim ; /usr/bin/python3'
Jan 21 23:26:02 compute-0 sudo[88172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:02 compute-0 podman[88135]: 2026-01-21 23:26:02.480361459 +0000 UTC m=+0.030296620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:26:02 compute-0 systemd[1]: Started libpod-conmon-8acf77c90af32e00c8b7d72790de3d6f1936c5e3cec34ac669ea876036b0e29d.scope.
Jan 21 23:26:02 compute-0 python3[88174]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:26:02 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06989cf790cae040c2f1f7faff3e291d3293c049b364dff37a69a537d43f5e58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06989cf790cae040c2f1f7faff3e291d3293c049b364dff37a69a537d43f5e58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06989cf790cae040c2f1f7faff3e291d3293c049b364dff37a69a537d43f5e58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06989cf790cae040c2f1f7faff3e291d3293c049b364dff37a69a537d43f5e58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06989cf790cae040c2f1f7faff3e291d3293c049b364dff37a69a537d43f5e58/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:02 compute-0 podman[88179]: 2026-01-21 23:26:02.73320908 +0000 UTC m=+0.030277650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:03 compute-0 podman[88135]: 2026-01-21 23:26:03.14223593 +0000 UTC m=+0.692171161 container init 8acf77c90af32e00c8b7d72790de3d6f1936c5e3cec34ac669ea876036b0e29d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 21 23:26:03 compute-0 podman[88135]: 2026-01-21 23:26:03.154814997 +0000 UTC m=+0.704750148 container start 8acf77c90af32e00c8b7d72790de3d6f1936c5e3cec34ac669ea876036b0e29d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Jan 21 23:26:03 compute-0 podman[88135]: 2026-01-21 23:26:03.300014267 +0000 UTC m=+0.849949438 container attach 8acf77c90af32e00c8b7d72790de3d6f1936c5e3cec34ac669ea876036b0e29d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 21 23:26:03 compute-0 podman[88179]: 2026-01-21 23:26:03.843948826 +0000 UTC m=+1.141017416 container create aa7e86aad45354f72f4fdd7d540153b86b55a59a62d6d7dbf4a9104a7aa88a36 (image=quay.io/ceph/ceph:v18, name=keen_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 21 23:26:03 compute-0 ceph-mon[74318]: pgmap v96: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:03 compute-0 kind_jang[88177]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:26:03 compute-0 kind_jang[88177]: --> relative data size: 1.0
Jan 21 23:26:03 compute-0 kind_jang[88177]: --> All data devices are unavailable
Jan 21 23:26:04 compute-0 systemd[1]: libpod-8acf77c90af32e00c8b7d72790de3d6f1936c5e3cec34ac669ea876036b0e29d.scope: Deactivated successfully.
Jan 21 23:26:04 compute-0 podman[88135]: 2026-01-21 23:26:04.111140208 +0000 UTC m=+1.661075379 container died 8acf77c90af32e00c8b7d72790de3d6f1936c5e3cec34ac669ea876036b0e29d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:04 compute-0 systemd[1]: Started libpod-conmon-aa7e86aad45354f72f4fdd7d540153b86b55a59a62d6d7dbf4a9104a7aa88a36.scope.
Jan 21 23:26:04 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9af934b52d5056b5978be933d65e95363e76c4ccd39b3ce82ecd7e906ff3515/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9af934b52d5056b5978be933d65e95363e76c4ccd39b3ce82ecd7e906ff3515/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:04 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "19caaef7-f327-458f-89fe-fcabf7ccafa4"} v 0) v1
Jan 21 23:26:04 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "19caaef7-f327-458f-89fe-fcabf7ccafa4"}]: dispatch
Jan 21 23:26:04 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 21 23:26:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v97: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-06989cf790cae040c2f1f7faff3e291d3293c049b364dff37a69a537d43f5e58-merged.mount: Deactivated successfully.
Jan 21 23:26:05 compute-0 podman[88179]: 2026-01-21 23:26:05.014199779 +0000 UTC m=+2.311268419 container init aa7e86aad45354f72f4fdd7d540153b86b55a59a62d6d7dbf4a9104a7aa88a36 (image=quay.io/ceph/ceph:v18, name=keen_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:26:05 compute-0 podman[88179]: 2026-01-21 23:26:05.023045394 +0000 UTC m=+2.320113984 container start aa7e86aad45354f72f4fdd7d540153b86b55a59a62d6d7dbf4a9104a7aa88a36 (image=quay.io/ceph/ceph:v18, name=keen_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 23:26:05 compute-0 podman[88179]: 2026-01-21 23:26:05.02987242 +0000 UTC m=+2.326940980 container attach aa7e86aad45354f72f4fdd7d540153b86b55a59a62d6d7dbf4a9104a7aa88a36 (image=quay.io/ceph/ceph:v18, name=keen_noether, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 23:26:05 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "19caaef7-f327-458f-89fe-fcabf7ccafa4"}]': finished
Jan 21 23:26:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Jan 21 23:26:05 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Jan 21 23:26:05 compute-0 podman[88205]: 2026-01-21 23:26:05.039941802 +0000 UTC m=+1.015782436 container remove 8acf77c90af32e00c8b7d72790de3d6f1936c5e3cec34ac669ea876036b0e29d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 21 23:26:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 21 23:26:05 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:05 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 23:26:05 compute-0 systemd[1]: libpod-conmon-8acf77c90af32e00c8b7d72790de3d6f1936c5e3cec34ac669ea876036b0e29d.scope: Deactivated successfully.
Jan 21 23:26:05 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/555514715' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "19caaef7-f327-458f-89fe-fcabf7ccafa4"}]: dispatch
Jan 21 23:26:05 compute-0 ceph-mon[74318]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "19caaef7-f327-458f-89fe-fcabf7ccafa4"}]: dispatch
Jan 21 23:26:05 compute-0 sudo[88016]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:05 compute-0 sudo[88224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:05 compute-0 sudo[88224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:05 compute-0 sudo[88224]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:05 compute-0 ceph-mgr[74614]: [progress INFO root] Writing back 9 completed events
Jan 21 23:26:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 21 23:26:05 compute-0 sudo[88249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:26:05 compute-0 sudo[88249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:05 compute-0 sudo[88249]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:05 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:05 compute-0 sudo[88274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:05 compute-0 sudo[88274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:05 compute-0 sudo[88274]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:05 compute-0 sudo[88299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:26:05 compute-0 sudo[88299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Jan 21 23:26:05 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3870437960' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 21 23:26:05 compute-0 podman[88384]: 2026-01-21 23:26:05.761738861 +0000 UTC m=+0.054247229 container create 34680f30aa84746a9142e032b6a441967932155cffb194de4fe721a47ce46182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_tu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 23:26:05 compute-0 systemd[1]: Started libpod-conmon-34680f30aa84746a9142e032b6a441967932155cffb194de4fe721a47ce46182.scope.
Jan 21 23:26:05 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:05 compute-0 podman[88384]: 2026-01-21 23:26:05.73536373 +0000 UTC m=+0.027872198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:26:05 compute-0 podman[88384]: 2026-01-21 23:26:05.835259778 +0000 UTC m=+0.127768206 container init 34680f30aa84746a9142e032b6a441967932155cffb194de4fe721a47ce46182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:26:05 compute-0 podman[88384]: 2026-01-21 23:26:05.845720882 +0000 UTC m=+0.138229290 container start 34680f30aa84746a9142e032b6a441967932155cffb194de4fe721a47ce46182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_tu, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:26:05 compute-0 reverent_tu[88401]: 167 167
Jan 21 23:26:05 compute-0 systemd[1]: libpod-34680f30aa84746a9142e032b6a441967932155cffb194de4fe721a47ce46182.scope: Deactivated successfully.
Jan 21 23:26:05 compute-0 conmon[88401]: conmon 34680f30aa84746a9142 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34680f30aa84746a9142e032b6a441967932155cffb194de4fe721a47ce46182.scope/container/memory.events
Jan 21 23:26:05 compute-0 podman[88384]: 2026-01-21 23:26:05.849876127 +0000 UTC m=+0.142384535 container attach 34680f30aa84746a9142e032b6a441967932155cffb194de4fe721a47ce46182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_tu, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 21 23:26:05 compute-0 podman[88384]: 2026-01-21 23:26:05.850905398 +0000 UTC m=+0.143413776 container died 34680f30aa84746a9142e032b6a441967932155cffb194de4fe721a47ce46182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 21 23:26:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-da9c220a17520fb72f775aa312c230e1965b3d8e9626d23dcee1bce82343d7a2-merged.mount: Deactivated successfully.
Jan 21 23:26:05 compute-0 podman[88384]: 2026-01-21 23:26:05.888434865 +0000 UTC m=+0.180943243 container remove 34680f30aa84746a9142e032b6a441967932155cffb194de4fe721a47ce46182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_tu, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:26:05 compute-0 systemd[1]: libpod-conmon-34680f30aa84746a9142e032b6a441967932155cffb194de4fe721a47ce46182.scope: Deactivated successfully.
Jan 21 23:26:06 compute-0 podman[88424]: 2026-01-21 23:26:06.092021176 +0000 UTC m=+0.024904388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:26:06 compute-0 podman[88424]: 2026-01-21 23:26:06.189812922 +0000 UTC m=+0.122696114 container create 320a26768c6387adb792fee21bcb1889834d0cff79fb4f3e39ef8d5294a7d119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Jan 21 23:26:06 compute-0 ceph-mon[74318]: 3.4 deep-scrub starts
Jan 21 23:26:06 compute-0 ceph-mon[74318]: 3.4 deep-scrub ok
Jan 21 23:26:06 compute-0 ceph-mon[74318]: pgmap v97: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:06 compute-0 ceph-mon[74318]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "19caaef7-f327-458f-89fe-fcabf7ccafa4"}]': finished
Jan 21 23:26:06 compute-0 ceph-mon[74318]: osdmap e32: 3 total, 2 up, 3 in
Jan 21 23:26:06 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:06 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:06 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3870437960' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 21 23:26:06 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1098086183' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 21 23:26:06 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 21 23:26:06 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3870437960' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 21 23:26:06 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e33 e33: 3 total, 2 up, 3 in
Jan 21 23:26:06 compute-0 keen_noether[88218]: enabled application 'rbd' on pool 'images'
Jan 21 23:26:06 compute-0 systemd[1]: Started libpod-conmon-320a26768c6387adb792fee21bcb1889834d0cff79fb4f3e39ef8d5294a7d119.scope.
Jan 21 23:26:06 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 2 up, 3 in
Jan 21 23:26:06 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 21 23:26:06 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:06 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 23:26:06 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 21 23:26:06 compute-0 podman[88179]: 2026-01-21 23:26:06.302477485 +0000 UTC m=+3.599546035 container died aa7e86aad45354f72f4fdd7d540153b86b55a59a62d6d7dbf4a9104a7aa88a36 (image=quay.io/ceph/ceph:v18, name=keen_noether, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 21 23:26:06 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:06 compute-0 systemd[1]: libpod-aa7e86aad45354f72f4fdd7d540153b86b55a59a62d6d7dbf4a9104a7aa88a36.scope: Deactivated successfully.
Jan 21 23:26:06 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 21 23:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7996b783ab20366b2eaae97fa6cd823987254425d9dab56e6a1400586b08ea1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7996b783ab20366b2eaae97fa6cd823987254425d9dab56e6a1400586b08ea1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7996b783ab20366b2eaae97fa6cd823987254425d9dab56e6a1400586b08ea1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7996b783ab20366b2eaae97fa6cd823987254425d9dab56e6a1400586b08ea1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9af934b52d5056b5978be933d65e95363e76c4ccd39b3ce82ecd7e906ff3515-merged.mount: Deactivated successfully.
Jan 21 23:26:06 compute-0 podman[88424]: 2026-01-21 23:26:06.352086304 +0000 UTC m=+0.284969596 container init 320a26768c6387adb792fee21bcb1889834d0cff79fb4f3e39ef8d5294a7d119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:26:06 compute-0 podman[88424]: 2026-01-21 23:26:06.358366653 +0000 UTC m=+0.291249875 container start 320a26768c6387adb792fee21bcb1889834d0cff79fb4f3e39ef8d5294a7d119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_haslett, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:26:06 compute-0 podman[88424]: 2026-01-21 23:26:06.371971221 +0000 UTC m=+0.304854433 container attach 320a26768c6387adb792fee21bcb1889834d0cff79fb4f3e39ef8d5294a7d119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_haslett, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 21 23:26:06 compute-0 podman[88179]: 2026-01-21 23:26:06.38525103 +0000 UTC m=+3.682319620 container remove aa7e86aad45354f72f4fdd7d540153b86b55a59a62d6d7dbf4a9104a7aa88a36 (image=quay.io/ceph/ceph:v18, name=keen_noether, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 23:26:06 compute-0 systemd[1]: libpod-conmon-aa7e86aad45354f72f4fdd7d540153b86b55a59a62d6d7dbf4a9104a7aa88a36.scope: Deactivated successfully.
Jan 21 23:26:06 compute-0 sudo[88172]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v100: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:06 compute-0 sudo[88482]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbeetuussvxsrzpfrctkssooptgumgpq ; /usr/bin/python3'
Jan 21 23:26:06 compute-0 sudo[88482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:06 compute-0 python3[88484]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:26:06 compute-0 podman[88485]: 2026-01-21 23:26:06.7426639 +0000 UTC m=+0.058318601 container create 2f367bf44d81f60e311bb8bf1070d80353032d941ffe74e6af7585b0ce432ad0 (image=quay.io/ceph/ceph:v18, name=reverent_roentgen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:26:06 compute-0 systemd[1]: Started libpod-conmon-2f367bf44d81f60e311bb8bf1070d80353032d941ffe74e6af7585b0ce432ad0.scope.
Jan 21 23:26:06 compute-0 podman[88485]: 2026-01-21 23:26:06.721211406 +0000 UTC m=+0.036866097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:06 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdcc6e9f83cf13655f508761fc8502c1b30e4e11f5e42b0bf97536c04aaf5ae1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdcc6e9f83cf13655f508761fc8502c1b30e4e11f5e42b0bf97536c04aaf5ae1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:06 compute-0 podman[88485]: 2026-01-21 23:26:06.837502157 +0000 UTC m=+0.153156898 container init 2f367bf44d81f60e311bb8bf1070d80353032d941ffe74e6af7585b0ce432ad0 (image=quay.io/ceph/ceph:v18, name=reverent_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:26:06 compute-0 podman[88485]: 2026-01-21 23:26:06.843821097 +0000 UTC m=+0.159475788 container start 2f367bf44d81f60e311bb8bf1070d80353032d941ffe74e6af7585b0ce432ad0 (image=quay.io/ceph/ceph:v18, name=reverent_roentgen, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:26:06 compute-0 podman[88485]: 2026-01-21 23:26:06.848104555 +0000 UTC m=+0.163759246 container attach 2f367bf44d81f60e311bb8bf1070d80353032d941ffe74e6af7585b0ce432ad0 (image=quay.io/ceph/ceph:v18, name=reverent_roentgen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 21 23:26:07 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 23:26:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:26:07 compute-0 kind_haslett[88441]: {
Jan 21 23:26:07 compute-0 kind_haslett[88441]:     "1": [
Jan 21 23:26:07 compute-0 kind_haslett[88441]:         {
Jan 21 23:26:07 compute-0 kind_haslett[88441]:             "devices": [
Jan 21 23:26:07 compute-0 kind_haslett[88441]:                 "/dev/loop3"
Jan 21 23:26:07 compute-0 kind_haslett[88441]:             ],
Jan 21 23:26:07 compute-0 kind_haslett[88441]:             "lv_name": "ceph_lv0",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:             "lv_size": "7511998464",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:             "name": "ceph_lv0",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:             "tags": {
Jan 21 23:26:07 compute-0 kind_haslett[88441]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:                 "ceph.cluster_name": "ceph",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:                 "ceph.crush_device_class": "",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:                 "ceph.encrypted": "0",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:                 "ceph.osd_id": "1",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:                 "ceph.type": "block",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:                 "ceph.vdo": "0"
Jan 21 23:26:07 compute-0 kind_haslett[88441]:             },
Jan 21 23:26:07 compute-0 kind_haslett[88441]:             "type": "block",
Jan 21 23:26:07 compute-0 kind_haslett[88441]:             "vg_name": "ceph_vg0"
Jan 21 23:26:07 compute-0 kind_haslett[88441]:         }
Jan 21 23:26:07 compute-0 kind_haslett[88441]:     ]
Jan 21 23:26:07 compute-0 kind_haslett[88441]: }
Jan 21 23:26:07 compute-0 systemd[1]: libpod-320a26768c6387adb792fee21bcb1889834d0cff79fb4f3e39ef8d5294a7d119.scope: Deactivated successfully.
Jan 21 23:26:07 compute-0 podman[88424]: 2026-01-21 23:26:07.229114404 +0000 UTC m=+1.161997606 container died 320a26768c6387adb792fee21bcb1889834d0cff79fb4f3e39ef8d5294a7d119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 21 23:26:07 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 21 23:26:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-7996b783ab20366b2eaae97fa6cd823987254425d9dab56e6a1400586b08ea1f-merged.mount: Deactivated successfully.
Jan 21 23:26:07 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 21 23:26:07 compute-0 podman[88424]: 2026-01-21 23:26:07.286141946 +0000 UTC m=+1.219025138 container remove 320a26768c6387adb792fee21bcb1889834d0cff79fb4f3e39ef8d5294a7d119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_haslett, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:26:07 compute-0 systemd[1]: libpod-conmon-320a26768c6387adb792fee21bcb1889834d0cff79fb4f3e39ef8d5294a7d119.scope: Deactivated successfully.
Jan 21 23:26:07 compute-0 sudo[88299]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:07 compute-0 sudo[88541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:07 compute-0 sudo[88541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:07 compute-0 sudo[88541]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Jan 21 23:26:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/302695718' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 21 23:26:07 compute-0 sudo[88566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:26:07 compute-0 sudo[88566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:07 compute-0 sudo[88566]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 21 23:26:07 compute-0 sudo[88592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:07 compute-0 sudo[88592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:07 compute-0 sudo[88592]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:07 compute-0 sudo[88617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:26:07 compute-0 sudo[88617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:08 compute-0 podman[88683]: 2026-01-21 23:26:08.01607333 +0000 UTC m=+0.071221060 container create 97d04fde193f6a5147db52cf82624ab4dac622ff02924afa71088df2fe9abc30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jang, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 21 23:26:08 compute-0 podman[88683]: 2026-01-21 23:26:07.982401689 +0000 UTC m=+0.037549489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:26:08 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 21 23:26:08 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 21 23:26:08 compute-0 systemd[1]: Started libpod-conmon-97d04fde193f6a5147db52cf82624ab4dac622ff02924afa71088df2fe9abc30.scope.
Jan 21 23:26:08 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:08 compute-0 ceph-mon[74318]: 3.6 deep-scrub starts
Jan 21 23:26:08 compute-0 ceph-mon[74318]: 3.6 deep-scrub ok
Jan 21 23:26:08 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3870437960' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 21 23:26:08 compute-0 ceph-mon[74318]: osdmap e33: 3 total, 2 up, 3 in
Jan 21 23:26:08 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:08 compute-0 ceph-mon[74318]: 2.b scrub starts
Jan 21 23:26:08 compute-0 ceph-mon[74318]: 2.b scrub ok
Jan 21 23:26:08 compute-0 ceph-mon[74318]: pgmap v100: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:08 compute-0 ceph-mon[74318]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 23:26:08 compute-0 podman[88683]: 2026-01-21 23:26:08.386407658 +0000 UTC m=+0.441555398 container init 97d04fde193f6a5147db52cf82624ab4dac622ff02924afa71088df2fe9abc30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jang, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 23:26:08 compute-0 podman[88683]: 2026-01-21 23:26:08.392424959 +0000 UTC m=+0.447572709 container start 97d04fde193f6a5147db52cf82624ab4dac622ff02924afa71088df2fe9abc30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jang, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 23:26:08 compute-0 gifted_jang[88701]: 167 167
Jan 21 23:26:08 compute-0 systemd[1]: libpod-97d04fde193f6a5147db52cf82624ab4dac622ff02924afa71088df2fe9abc30.scope: Deactivated successfully.
Jan 21 23:26:08 compute-0 podman[88683]: 2026-01-21 23:26:08.400850792 +0000 UTC m=+0.455998542 container attach 97d04fde193f6a5147db52cf82624ab4dac622ff02924afa71088df2fe9abc30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:26:08 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/302695718' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 21 23:26:08 compute-0 podman[88683]: 2026-01-21 23:26:08.401325135 +0000 UTC m=+0.456472855 container died 97d04fde193f6a5147db52cf82624ab4dac622ff02924afa71088df2fe9abc30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jang, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:26:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e34 e34: 3 total, 2 up, 3 in
Jan 21 23:26:08 compute-0 reverent_roentgen[88500]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 21 23:26:08 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 2 up, 3 in
Jan 21 23:26:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 21 23:26:08 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:08 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 23:26:08 compute-0 systemd[1]: libpod-2f367bf44d81f60e311bb8bf1070d80353032d941ffe74e6af7585b0ce432ad0.scope: Deactivated successfully.
Jan 21 23:26:08 compute-0 podman[88485]: 2026-01-21 23:26:08.439910184 +0000 UTC m=+1.755564875 container died 2f367bf44d81f60e311bb8bf1070d80353032d941ffe74e6af7585b0ce432ad0 (image=quay.io/ceph/ceph:v18, name=reverent_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 21 23:26:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-90d01d8dbbdb3e7125d81b0f171d71a5089257fa358a9404c132b41114e89953-merged.mount: Deactivated successfully.
Jan 21 23:26:08 compute-0 podman[88683]: 2026-01-21 23:26:08.465446351 +0000 UTC m=+0.520594081 container remove 97d04fde193f6a5147db52cf82624ab4dac622ff02924afa71088df2fe9abc30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jang, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v102: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:08 compute-0 systemd[1]: libpod-conmon-97d04fde193f6a5147db52cf82624ab4dac622ff02924afa71088df2fe9abc30.scope: Deactivated successfully.
Jan 21 23:26:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdcc6e9f83cf13655f508761fc8502c1b30e4e11f5e42b0bf97536c04aaf5ae1-merged.mount: Deactivated successfully.
Jan 21 23:26:08 compute-0 podman[88485]: 2026-01-21 23:26:08.515230725 +0000 UTC m=+1.830885386 container remove 2f367bf44d81f60e311bb8bf1070d80353032d941ffe74e6af7585b0ce432ad0 (image=quay.io/ceph/ceph:v18, name=reverent_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 23:26:08 compute-0 systemd[1]: libpod-conmon-2f367bf44d81f60e311bb8bf1070d80353032d941ffe74e6af7585b0ce432ad0.scope: Deactivated successfully.
Jan 21 23:26:08 compute-0 sudo[88482]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:08 compute-0 podman[88735]: 2026-01-21 23:26:08.637826806 +0000 UTC m=+0.041207818 container create 9f4932d4af37da8b72732923d8deb32a3206ee0d570098079837422884f30716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 21 23:26:08 compute-0 sudo[88772]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyrlywcrorugyjujwsxadbyqpzdutwal ; /usr/bin/python3'
Jan 21 23:26:08 compute-0 sudo[88772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:08 compute-0 systemd[1]: Started libpod-conmon-9f4932d4af37da8b72732923d8deb32a3206ee0d570098079837422884f30716.scope.
Jan 21 23:26:08 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bedd0b77cd6d58008f595034af295bcb3459cfcc68cec12021f1aa9a982c08aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bedd0b77cd6d58008f595034af295bcb3459cfcc68cec12021f1aa9a982c08aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bedd0b77cd6d58008f595034af295bcb3459cfcc68cec12021f1aa9a982c08aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bedd0b77cd6d58008f595034af295bcb3459cfcc68cec12021f1aa9a982c08aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:08 compute-0 podman[88735]: 2026-01-21 23:26:08.620524716 +0000 UTC m=+0.023905738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:26:08 compute-0 podman[88735]: 2026-01-21 23:26:08.722676113 +0000 UTC m=+0.126057145 container init 9f4932d4af37da8b72732923d8deb32a3206ee0d570098079837422884f30716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 23:26:08 compute-0 podman[88735]: 2026-01-21 23:26:08.732212109 +0000 UTC m=+0.135593121 container start 9f4932d4af37da8b72732923d8deb32a3206ee0d570098079837422884f30716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 23:26:08 compute-0 podman[88735]: 2026-01-21 23:26:08.735534539 +0000 UTC m=+0.138915541 container attach 9f4932d4af37da8b72732923d8deb32a3206ee0d570098079837422884f30716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wilson, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 21 23:26:08 compute-0 python3[88774]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:26:08 compute-0 podman[88782]: 2026-01-21 23:26:08.847513751 +0000 UTC m=+0.040654251 container create 5810a7e3a7d32562fcc2dbfd96f26a1033bbc5c2aa68a7cdc4c0d5cedc142d42 (image=quay.io/ceph/ceph:v18, name=adoring_cerf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:26:08 compute-0 systemd[1]: Started libpod-conmon-5810a7e3a7d32562fcc2dbfd96f26a1033bbc5c2aa68a7cdc4c0d5cedc142d42.scope.
Jan 21 23:26:08 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea58f20aee5b31db6b649827cf11c65eb90c8c8e59231c8612530689f3d77d9e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea58f20aee5b31db6b649827cf11c65eb90c8c8e59231c8612530689f3d77d9e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:08 compute-0 podman[88782]: 2026-01-21 23:26:08.828917033 +0000 UTC m=+0.022057563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:08 compute-0 podman[88782]: 2026-01-21 23:26:08.935194903 +0000 UTC m=+0.128335423 container init 5810a7e3a7d32562fcc2dbfd96f26a1033bbc5c2aa68a7cdc4c0d5cedc142d42 (image=quay.io/ceph/ceph:v18, name=adoring_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 23:26:08 compute-0 podman[88782]: 2026-01-21 23:26:08.941208294 +0000 UTC m=+0.134348794 container start 5810a7e3a7d32562fcc2dbfd96f26a1033bbc5c2aa68a7cdc4c0d5cedc142d42 (image=quay.io/ceph/ceph:v18, name=adoring_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:08 compute-0 podman[88782]: 2026-01-21 23:26:08.944634846 +0000 UTC m=+0.137775396 container attach 5810a7e3a7d32562fcc2dbfd96f26a1033bbc5c2aa68a7cdc4c0d5cedc142d42 (image=quay.io/ceph/ceph:v18, name=adoring_cerf, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 21 23:26:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:26:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:26:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:26:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:26:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:26:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:26:09 compute-0 ceph-mon[74318]: 2.f scrub starts
Jan 21 23:26:09 compute-0 ceph-mon[74318]: 2.f scrub ok
Jan 21 23:26:09 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/302695718' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 21 23:26:09 compute-0 ceph-mon[74318]: 2.11 scrub starts
Jan 21 23:26:09 compute-0 ceph-mon[74318]: 2.11 scrub ok
Jan 21 23:26:09 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/302695718' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 21 23:26:09 compute-0 ceph-mon[74318]: osdmap e34: 3 total, 2 up, 3 in
Jan 21 23:26:09 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:09 compute-0 ceph-mon[74318]: pgmap v102: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Jan 21 23:26:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3377259321' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 21 23:26:09 compute-0 modest_wilson[88777]: {
Jan 21 23:26:09 compute-0 modest_wilson[88777]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:26:09 compute-0 modest_wilson[88777]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:26:09 compute-0 modest_wilson[88777]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:26:09 compute-0 modest_wilson[88777]:         "osd_id": 1,
Jan 21 23:26:09 compute-0 modest_wilson[88777]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:26:09 compute-0 modest_wilson[88777]:         "type": "bluestore"
Jan 21 23:26:09 compute-0 modest_wilson[88777]:     }
Jan 21 23:26:09 compute-0 modest_wilson[88777]: }
Jan 21 23:26:09 compute-0 systemd[1]: libpod-9f4932d4af37da8b72732923d8deb32a3206ee0d570098079837422884f30716.scope: Deactivated successfully.
Jan 21 23:26:09 compute-0 podman[88735]: 2026-01-21 23:26:09.696579591 +0000 UTC m=+1.099960613 container died 9f4932d4af37da8b72732923d8deb32a3206ee0d570098079837422884f30716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wilson, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:26:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-bedd0b77cd6d58008f595034af295bcb3459cfcc68cec12021f1aa9a982c08aa-merged.mount: Deactivated successfully.
Jan 21 23:26:09 compute-0 podman[88735]: 2026-01-21 23:26:09.751461788 +0000 UTC m=+1.154842800 container remove 9f4932d4af37da8b72732923d8deb32a3206ee0d570098079837422884f30716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wilson, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:26:09 compute-0 systemd[1]: libpod-conmon-9f4932d4af37da8b72732923d8deb32a3206ee0d570098079837422884f30716.scope: Deactivated successfully.
Jan 21 23:26:09 compute-0 sudo[88617]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:26:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:26:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:10 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.12 deep-scrub starts
Jan 21 23:26:10 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.12 deep-scrub ok
Jan 21 23:26:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 21 23:26:10 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3377259321' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 21 23:26:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:10 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3377259321' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 21 23:26:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e35 e35: 3 total, 2 up, 3 in
Jan 21 23:26:10 compute-0 adoring_cerf[88798]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 21 23:26:10 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 2 up, 3 in
Jan 21 23:26:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 21 23:26:10 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:10 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 23:26:10 compute-0 systemd[1]: libpod-5810a7e3a7d32562fcc2dbfd96f26a1033bbc5c2aa68a7cdc4c0d5cedc142d42.scope: Deactivated successfully.
Jan 21 23:26:10 compute-0 podman[88782]: 2026-01-21 23:26:10.454860335 +0000 UTC m=+1.648000876 container died 5810a7e3a7d32562fcc2dbfd96f26a1033bbc5c2aa68a7cdc4c0d5cedc142d42 (image=quay.io/ceph/ceph:v18, name=adoring_cerf, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:26:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v104: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea58f20aee5b31db6b649827cf11c65eb90c8c8e59231c8612530689f3d77d9e-merged.mount: Deactivated successfully.
Jan 21 23:26:10 compute-0 podman[88782]: 2026-01-21 23:26:10.50064785 +0000 UTC m=+1.693788350 container remove 5810a7e3a7d32562fcc2dbfd96f26a1033bbc5c2aa68a7cdc4c0d5cedc142d42 (image=quay.io/ceph/ceph:v18, name=adoring_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 23:26:10 compute-0 systemd[1]: libpod-conmon-5810a7e3a7d32562fcc2dbfd96f26a1033bbc5c2aa68a7cdc4c0d5cedc142d42.scope: Deactivated successfully.
Jan 21 23:26:10 compute-0 sudo[88772]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Jan 21 23:26:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 21 23:26:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:26:11 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:11 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Jan 21 23:26:11 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Jan 21 23:26:11 compute-0 python3[88938]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 23:26:11 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 21 23:26:11 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 21 23:26:11 compute-0 ceph-mon[74318]: 2.12 deep-scrub starts
Jan 21 23:26:11 compute-0 ceph-mon[74318]: 2.12 deep-scrub ok
Jan 21 23:26:11 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3377259321' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 21 23:26:11 compute-0 ceph-mon[74318]: osdmap e35: 3 total, 2 up, 3 in
Jan 21 23:26:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:11 compute-0 ceph-mon[74318]: pgmap v104: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 21 23:26:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:11 compute-0 python3[89009]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769037971.1624897-37376-225447978760578/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:26:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:26:12 compute-0 sudo[89109]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piumdxgpuvisiwlxwemuvtewkqeejnte ; /usr/bin/python3'
Jan 21 23:26:12 compute-0 sudo[89109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v105: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:12 compute-0 python3[89111]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 23:26:12 compute-0 sudo[89109]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:12 compute-0 ceph-mon[74318]: Deploying daemon osd.2 on compute-2
Jan 21 23:26:12 compute-0 ceph-mon[74318]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 21 23:26:12 compute-0 ceph-mon[74318]: Cluster is now healthy
Jan 21 23:26:12 compute-0 sudo[89184]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpazxmhzhvzgwtolmomemrcuzhizadwt ; /usr/bin/python3'
Jan 21 23:26:12 compute-0 sudo[89184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:12 compute-0 python3[89186]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769037972.172999-37390-1058243513852/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=1136e1cc24024ffa7387c2c8a059c94330f98c0c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:26:12 compute-0 sudo[89184]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:13 compute-0 sudo[89234]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwmdfrgalykbtndvuljoqsgmscgqvpyr ; /usr/bin/python3'
Jan 21 23:26:13 compute-0 sudo[89234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:13 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 21 23:26:13 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 21 23:26:13 compute-0 python3[89236]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:26:13 compute-0 podman[89237]: 2026-01-21 23:26:13.413101965 +0000 UTC m=+0.053012512 container create 4300b7dcaddb26825d686c9f1af003dcd1d1e8cdf6d0b9d1284fefa0be903e77 (image=quay.io/ceph/ceph:v18, name=cool_ride, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 21 23:26:13 compute-0 systemd[1]: Started libpod-conmon-4300b7dcaddb26825d686c9f1af003dcd1d1e8cdf6d0b9d1284fefa0be903e77.scope.
Jan 21 23:26:13 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233d915f1e8d17a39dcf6c2e22cbad48a466d19d0bb923eb01045c02554ad8d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233d915f1e8d17a39dcf6c2e22cbad48a466d19d0bb923eb01045c02554ad8d7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233d915f1e8d17a39dcf6c2e22cbad48a466d19d0bb923eb01045c02554ad8d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:13 compute-0 podman[89237]: 2026-01-21 23:26:13.393448855 +0000 UTC m=+0.033359402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:13 compute-0 podman[89237]: 2026-01-21 23:26:13.510378895 +0000 UTC m=+0.150289442 container init 4300b7dcaddb26825d686c9f1af003dcd1d1e8cdf6d0b9d1284fefa0be903e77 (image=quay.io/ceph/ceph:v18, name=cool_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 23:26:13 compute-0 podman[89237]: 2026-01-21 23:26:13.516132498 +0000 UTC m=+0.156043045 container start 4300b7dcaddb26825d686c9f1af003dcd1d1e8cdf6d0b9d1284fefa0be903e77 (image=quay.io/ceph/ceph:v18, name=cool_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:26:13 compute-0 podman[89237]: 2026-01-21 23:26:13.520077277 +0000 UTC m=+0.159987814 container attach 4300b7dcaddb26825d686c9f1af003dcd1d1e8cdf6d0b9d1284fefa0be903e77 (image=quay.io/ceph/ceph:v18, name=cool_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:13 compute-0 ceph-mon[74318]: 3.7 scrub starts
Jan 21 23:26:13 compute-0 ceph-mon[74318]: 3.7 scrub ok
Jan 21 23:26:13 compute-0 ceph-mon[74318]: pgmap v105: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 21 23:26:14 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/354996813' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 21 23:26:14 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/354996813' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 21 23:26:14 compute-0 cool_ride[89252]: 
Jan 21 23:26:14 compute-0 cool_ride[89252]: [global]
Jan 21 23:26:14 compute-0 cool_ride[89252]:         fsid = 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:26:14 compute-0 cool_ride[89252]:         mon_host = 192.168.122.100
Jan 21 23:26:14 compute-0 systemd[1]: libpod-4300b7dcaddb26825d686c9f1af003dcd1d1e8cdf6d0b9d1284fefa0be903e77.scope: Deactivated successfully.
Jan 21 23:26:14 compute-0 podman[89237]: 2026-01-21 23:26:14.093165381 +0000 UTC m=+0.733075888 container died 4300b7dcaddb26825d686c9f1af003dcd1d1e8cdf6d0b9d1284fefa0be903e77 (image=quay.io/ceph/ceph:v18, name=cool_ride, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:26:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-233d915f1e8d17a39dcf6c2e22cbad48a466d19d0bb923eb01045c02554ad8d7-merged.mount: Deactivated successfully.
Jan 21 23:26:14 compute-0 podman[89237]: 2026-01-21 23:26:14.139013758 +0000 UTC m=+0.778924265 container remove 4300b7dcaddb26825d686c9f1af003dcd1d1e8cdf6d0b9d1284fefa0be903e77 (image=quay.io/ceph/ceph:v18, name=cool_ride, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 23:26:14 compute-0 systemd[1]: libpod-conmon-4300b7dcaddb26825d686c9f1af003dcd1d1e8cdf6d0b9d1284fefa0be903e77.scope: Deactivated successfully.
Jan 21 23:26:14 compute-0 sudo[89234]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:14 compute-0 sudo[89315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kktecherigzajcbscojihfvqvgzlkipx ; /usr/bin/python3'
Jan 21 23:26:14 compute-0 sudo[89315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:14 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uvjsro started
Jan 21 23:26:14 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mgr.compute-2.uvjsro 192.168.122.102:0/3828277504; not ready for session (expect reconnect)
Jan 21 23:26:14 compute-0 python3[89317]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:26:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v106: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:14 compute-0 podman[89318]: 2026-01-21 23:26:14.528752509 +0000 UTC m=+0.040747275 container create 17fd62a6eb8a8b5e3c57672eb1168d0bc846f2016ee055b3fd091395a53b790b (image=quay.io/ceph/ceph:v18, name=peaceful_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 23:26:14 compute-0 systemd[1]: Started libpod-conmon-17fd62a6eb8a8b5e3c57672eb1168d0bc846f2016ee055b3fd091395a53b790b.scope.
Jan 21 23:26:14 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14670cdfee5306a25cc947af8661a2fee776c87850f617d4a9f449e3be1b183d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14670cdfee5306a25cc947af8661a2fee776c87850f617d4a9f449e3be1b183d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14670cdfee5306a25cc947af8661a2fee776c87850f617d4a9f449e3be1b183d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:14 compute-0 podman[89318]: 2026-01-21 23:26:14.596161483 +0000 UTC m=+0.108156239 container init 17fd62a6eb8a8b5e3c57672eb1168d0bc846f2016ee055b3fd091395a53b790b (image=quay.io/ceph/ceph:v18, name=peaceful_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 23:26:14 compute-0 podman[89318]: 2026-01-21 23:26:14.60273639 +0000 UTC m=+0.114731126 container start 17fd62a6eb8a8b5e3c57672eb1168d0bc846f2016ee055b3fd091395a53b790b (image=quay.io/ceph/ceph:v18, name=peaceful_lamarr, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:26:14 compute-0 podman[89318]: 2026-01-21 23:26:14.605714539 +0000 UTC m=+0.117709275 container attach 17fd62a6eb8a8b5e3c57672eb1168d0bc846f2016ee055b3fd091395a53b790b (image=quay.io/ceph/ceph:v18, name=peaceful_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 21 23:26:14 compute-0 podman[89318]: 2026-01-21 23:26:14.510872382 +0000 UTC m=+0.022867138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:14 compute-0 ceph-mon[74318]: 3.8 deep-scrub starts
Jan 21 23:26:14 compute-0 ceph-mon[74318]: 2.14 scrub starts
Jan 21 23:26:14 compute-0 ceph-mon[74318]: 3.8 deep-scrub ok
Jan 21 23:26:14 compute-0 ceph-mon[74318]: 2.14 scrub ok
Jan 21 23:26:14 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/354996813' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 21 23:26:14 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/354996813' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 21 23:26:14 compute-0 ceph-mon[74318]: Standby manager daemon compute-2.uvjsro started
Jan 21 23:26:15 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.boqcsl(active, since 2m), standbys: compute-2.uvjsro
Jan 21 23:26:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.uvjsro", "id": "compute-2.uvjsro"} v 0) v1
Jan 21 23:26:15 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uvjsro", "id": "compute-2.uvjsro"}]: dispatch
Jan 21 23:26:15 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 21 23:26:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Jan 21 23:26:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/451102614' entity='client.admin' 
Jan 21 23:26:15 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 21 23:26:15 compute-0 peaceful_lamarr[89333]: set ssl_option
Jan 21 23:26:15 compute-0 systemd[1]: libpod-17fd62a6eb8a8b5e3c57672eb1168d0bc846f2016ee055b3fd091395a53b790b.scope: Deactivated successfully.
Jan 21 23:26:15 compute-0 podman[89318]: 2026-01-21 23:26:15.273123856 +0000 UTC m=+0.785118592 container died 17fd62a6eb8a8b5e3c57672eb1168d0bc846f2016ee055b3fd091395a53b790b (image=quay.io/ceph/ceph:v18, name=peaceful_lamarr, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 23:26:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-14670cdfee5306a25cc947af8661a2fee776c87850f617d4a9f449e3be1b183d-merged.mount: Deactivated successfully.
Jan 21 23:26:15 compute-0 podman[89318]: 2026-01-21 23:26:15.315692194 +0000 UTC m=+0.827686940 container remove 17fd62a6eb8a8b5e3c57672eb1168d0bc846f2016ee055b3fd091395a53b790b (image=quay.io/ceph/ceph:v18, name=peaceful_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:26:15 compute-0 systemd[1]: libpod-conmon-17fd62a6eb8a8b5e3c57672eb1168d0bc846f2016ee055b3fd091395a53b790b.scope: Deactivated successfully.
Jan 21 23:26:15 compute-0 sudo[89315]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:15 compute-0 sudo[89393]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvldtsemcqywllxxaerawwhanznkwfrl ; /usr/bin/python3'
Jan 21 23:26:15 compute-0 sudo[89393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:15 compute-0 python3[89395]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:26:15 compute-0 ceph-mon[74318]: 3.b scrub starts
Jan 21 23:26:15 compute-0 ceph-mon[74318]: 3.b scrub ok
Jan 21 23:26:15 compute-0 ceph-mon[74318]: pgmap v106: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:15 compute-0 ceph-mon[74318]: mgrmap e9: compute-0.boqcsl(active, since 2m), standbys: compute-2.uvjsro
Jan 21 23:26:15 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uvjsro", "id": "compute-2.uvjsro"}]: dispatch
Jan 21 23:26:15 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/451102614' entity='client.admin' 
Jan 21 23:26:15 compute-0 podman[89396]: 2026-01-21 23:26:15.68828312 +0000 UTC m=+0.046374454 container create 488003145bc0894dac9bd0ef97a5081864127e28d5bda4bb18ad858098bd6818 (image=quay.io/ceph/ceph:v18, name=agitated_murdock, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 23:26:15 compute-0 systemd[1]: Started libpod-conmon-488003145bc0894dac9bd0ef97a5081864127e28d5bda4bb18ad858098bd6818.scope.
Jan 21 23:26:15 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f2027ae23a96be900b0efd087223500c9240f80471d98e1c70f923220ff7bf7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f2027ae23a96be900b0efd087223500c9240f80471d98e1c70f923220ff7bf7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f2027ae23a96be900b0efd087223500c9240f80471d98e1c70f923220ff7bf7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:15 compute-0 podman[89396]: 2026-01-21 23:26:15.671231358 +0000 UTC m=+0.029322702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:15 compute-0 podman[89396]: 2026-01-21 23:26:15.791253061 +0000 UTC m=+0.149344495 container init 488003145bc0894dac9bd0ef97a5081864127e28d5bda4bb18ad858098bd6818 (image=quay.io/ceph/ceph:v18, name=agitated_murdock, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:26:15 compute-0 podman[89396]: 2026-01-21 23:26:15.798327743 +0000 UTC m=+0.156419067 container start 488003145bc0894dac9bd0ef97a5081864127e28d5bda4bb18ad858098bd6818 (image=quay.io/ceph/ceph:v18, name=agitated_murdock, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 21 23:26:15 compute-0 podman[89396]: 2026-01-21 23:26:15.802840539 +0000 UTC m=+0.160931903 container attach 488003145bc0894dac9bd0ef97a5081864127e28d5bda4bb18ad858098bd6818 (image=quay.io/ceph/ceph:v18, name=agitated_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 21 23:26:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:26:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:26:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:16 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 21 23:26:16 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 21 23:26:16 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14280 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:26:16 compute-0 ceph-mgr[74614]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 21 23:26:16 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 21 23:26:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 21 23:26:16 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:16 compute-0 ceph-mgr[74614]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Jan 21 23:26:16 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Jan 21 23:26:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 21 23:26:16 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:16 compute-0 agitated_murdock[89411]: Scheduled rgw.rgw update...
Jan 21 23:26:16 compute-0 agitated_murdock[89411]: Scheduled ingress.rgw.default update...
Jan 21 23:26:16 compute-0 systemd[1]: libpod-488003145bc0894dac9bd0ef97a5081864127e28d5bda4bb18ad858098bd6818.scope: Deactivated successfully.
Jan 21 23:26:16 compute-0 podman[89396]: 2026-01-21 23:26:16.387815891 +0000 UTC m=+0.745907215 container died 488003145bc0894dac9bd0ef97a5081864127e28d5bda4bb18ad858098bd6818 (image=quay.io/ceph/ceph:v18, name=agitated_murdock, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 21 23:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f2027ae23a96be900b0efd087223500c9240f80471d98e1c70f923220ff7bf7-merged.mount: Deactivated successfully.
Jan 21 23:26:16 compute-0 podman[89396]: 2026-01-21 23:26:16.430874033 +0000 UTC m=+0.788965357 container remove 488003145bc0894dac9bd0ef97a5081864127e28d5bda4bb18ad858098bd6818 (image=quay.io/ceph/ceph:v18, name=agitated_murdock, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:16 compute-0 systemd[1]: libpod-conmon-488003145bc0894dac9bd0ef97a5081864127e28d5bda4bb18ad858098bd6818.scope: Deactivated successfully.
Jan 21 23:26:16 compute-0 sudo[89393]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v107: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:16 compute-0 ceph-mon[74318]: 2.16 scrub starts
Jan 21 23:26:16 compute-0 ceph-mon[74318]: 2.16 scrub ok
Jan 21 23:26:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:26:17 compute-0 python3[89521]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 23:26:17 compute-0 python3[89592]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769037977.2043295-37431-169667822430337/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:26:18 compute-0 ceph-mon[74318]: 2.17 scrub starts
Jan 21 23:26:18 compute-0 ceph-mon[74318]: 2.17 scrub ok
Jan 21 23:26:18 compute-0 ceph-mon[74318]: from='client.14280 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:26:18 compute-0 ceph-mon[74318]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 21 23:26:18 compute-0 ceph-mon[74318]: Saving service ingress.rgw.default spec with placement count:2
Jan 21 23:26:18 compute-0 ceph-mon[74318]: pgmap v107: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Jan 21 23:26:18 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 21 23:26:18 compute-0 sudo[89640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrkntbjprnrwiebeckfmusplfvylnxui ; /usr/bin/python3'
Jan 21 23:26:18 compute-0 sudo[89640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:26:18 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:26:18 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:18 compute-0 sudo[89643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:18 compute-0 sudo[89643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:18 compute-0 python3[89642]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
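The `fs volume create` subcommand drives the mgr volumes module, which creates the metadata and data pools, declares the filesystem, and schedules MDS daemons. The mon_commands it dispatches appear further down in this log; as a sketch, the equivalent manual sequence would be:

    # Manual equivalent of what the volumes module dispatches (see the
    # "osd pool create" and "fs new" audit entries below):
    ceph osd pool create cephfs.cephfs.meta
    ceph osd pool create cephfs.cephfs.data --bulk
    ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data
    ceph orch apply mds cephfs --placement="compute-0 compute-1 compute-2"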
Jan 21 23:26:18 compute-0 sudo[89643]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v108: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:18 compute-0 podman[89668]: 2026-01-21 23:26:18.498701453 +0000 UTC m=+0.045264890 container create 53941add95bd8a2748a96930202f77f7f6e48b08377e1917ecd0184003c2fee2 (image=quay.io/ceph/ceph:v18, name=kind_hugle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 23:26:18 compute-0 sudo[89669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:26:18 compute-0 systemd[1]: Started libpod-conmon-53941add95bd8a2748a96930202f77f7f6e48b08377e1917ecd0184003c2fee2.scope.
Jan 21 23:26:18 compute-0 sudo[89669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:18 compute-0 sudo[89669]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:18 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f3a003a528bed34b4835a3296828077d657d992d85a2e294b37b0ecb36795da/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f3a003a528bed34b4835a3296828077d657d992d85a2e294b37b0ecb36795da/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f3a003a528bed34b4835a3296828077d657d992d85a2e294b37b0ecb36795da/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:18 compute-0 podman[89668]: 2026-01-21 23:26:18.483540707 +0000 UTC m=+0.030104164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:18 compute-0 podman[89668]: 2026-01-21 23:26:18.577603331 +0000 UTC m=+0.124166818 container init 53941add95bd8a2748a96930202f77f7f6e48b08377e1917ecd0184003c2fee2 (image=quay.io/ceph/ceph:v18, name=kind_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:26:18 compute-0 podman[89668]: 2026-01-21 23:26:18.583656992 +0000 UTC m=+0.130220429 container start 53941add95bd8a2748a96930202f77f7f6e48b08377e1917ecd0184003c2fee2 (image=quay.io/ceph/ceph:v18, name=kind_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:26:18 compute-0 podman[89668]: 2026-01-21 23:26:18.587971133 +0000 UTC m=+0.134534580 container attach 53941add95bd8a2748a96930202f77f7f6e48b08377e1917ecd0184003c2fee2 (image=quay.io/ceph/ceph:v18, name=kind_hugle, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:26:18 compute-0 sudo[89714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:18 compute-0 sudo[89714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:18 compute-0 sudo[89714]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:19 compute-0 sudo[89756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:26:19 compute-0 sudo[89756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:19 compute-0 sudo[89756]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 21 23:26:19 compute-0 ceph-mon[74318]: from='osd.2 [v2:192.168.122.102:6800/3484655089,v1:192.168.122.102:6801/3484655089]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 21 23:26:19 compute-0 ceph-mon[74318]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 21 23:26:19 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:19 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 21 23:26:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e36 e36: 3 total, 2 up, 3 in
Jan 21 23:26:19 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 2 up, 3 in
Jan 21 23:26:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 21 23:26:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:19 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 23:26:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Jan 21 23:26:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 21 23:26:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e36 create-or-move crush item name 'osd.2' initial_weight 0.0068 at location {host=compute-2,root=default}
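The two mon_commands above are issued by osd.2 itself as it boots on compute-2: it first tags itself with a device class, then registers (or relocates) its CRUSH item under the host and root buckets, using its size-derived initial weight of 0.0068. The same operations, run by an administrator, would be (sketch):

    # Admin-level equivalents of the commands osd.2 dispatches at startup:
    ceph osd crush set-device-class hdd 2
    ceph osd crush create-or-move osd.2 0.0068 host=compute-2 root=default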
Jan 21 23:26:19 compute-0 sudo[89781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:19 compute-0 sudo[89781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:19 compute-0 sudo[89781]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:19 compute-0 sudo[89806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:26:19 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14286 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:26:19 compute-0 ceph-mgr[74614]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 21 23:26:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Jan 21 23:26:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 21 23:26:19 compute-0 sudo[89806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Jan 21 23:26:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 21 23:26:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Jan 21 23:26:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 21 23:26:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 21 23:26:19 compute-0 ceph-mon[74318]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 21 23:26:19 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 21 23:26:19 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0[74314]: 2026-01-21T23:26:19.154+0000 7f48ba39b640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 21 23:26:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
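Both health checks are expected at this instant: `fs new` registers the filesystem before any MDS daemon exists, so the cluster reports it offline (MDS_ALL_DOWN) and below max_mds (MDS_UP_LESS_THAN_MAX) until the mds.cephfs spec applied later in this log deploys the daemons. A quick way to confirm from a client (sketch):

    ceph health detail       # lists MDS_ALL_DOWN / MDS_UP_LESS_THAN_MAX with context
    ceph fs status cephfs    # shows the filesystem with no active MDS yet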
Jan 21 23:26:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e2 new map
Jan 21 23:26:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-21T23:26:19.155977+0000
                                           modified        2026-01-21T23:26:19.156015+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
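The indented block above is the mon's print_map dump of MDS map epoch 2: the freshly created 'cephfs' filesystem with metadata pool 6, data pool 7, max_mds 1, and an empty `up` set. Roughly the same information can be pulled on demand (sketch):

    ceph fs dump             # full fsmap, matching the print_map output above
    ceph fs get cephfs       # only the 'cephfs' filesystem's portion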
Jan 21 23:26:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 21 23:26:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e37 e37: 3 total, 2 up, 3 in
Jan 21 23:26:19 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 2 up, 3 in
Jan 21 23:26:19 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[4.1f( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.080377579s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 85.208236694s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 21 23:26:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[2.18( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=15.576691628s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active pruub 85.704589844s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[4.1f( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.080377579s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208236694s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[2.18( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=15.576691628s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.704589844s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[3.1a( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.074159622s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 85.202201843s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[3.1a( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.074159622s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.202201843s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[3.15( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.080066681s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 85.208251953s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[3.15( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.080066681s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208251953s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:19 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[4.15( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.080018997s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 85.208282471s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[4.15( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.080018997s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208282471s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[2.12( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=15.576162338s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active pruub 85.704582214s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[2.12( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=15.576162338s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.704582214s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[3.11( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.079904556s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 85.208419800s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[3.11( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.079904556s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208419800s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[4.9( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.079800606s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 85.208404541s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[4.9( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.079800606s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208404541s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[2.f( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=15.575814247s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active pruub 85.704460144s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[2.f( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=15.575814247s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.704460144s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[3.e( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.079648972s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 85.208419800s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[3.e( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.079648972s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208419800s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[4.8( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.079567909s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 85.208435059s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[2.5( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=15.575089455s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active pruub 85.704002380s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[4.8( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.079567909s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208435059s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[4.1( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.079538345s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 85.208587646s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[4.1( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.079538345s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208587646s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[3.9( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.079378128s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 85.208518982s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:19 compute-0 ceph-mgr[74614]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 21 23:26:19 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[3.9( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.079378128s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208518982s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[2.b( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=15.574801445s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active pruub 85.703994751s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[2.b( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=15.574801445s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.703994751s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[2.1c( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=15.571154594s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active pruub 85.700416565s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[3.1d( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.079336166s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 active pruub 85.208610535s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[2.1c( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=15.571154594s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.700416565s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[3.1d( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=37 pruub=15.079336166s) [] r=-1 lpr=37 pi=[30,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208610535s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[2.1d( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=15.571048737s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active pruub 85.700393677s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[2.1d( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=15.571048737s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.700393677s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 37 pg[2.5( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=15.575089455s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.704002380s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:19 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3484655089; not ready for session (expect reconnect)
Jan 21 23:26:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 21 23:26:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:19 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 23:26:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:19 compute-0 ceph-mgr[74614]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 21 23:26:19 compute-0 systemd[1]: libpod-53941add95bd8a2748a96930202f77f7f6e48b08377e1917ecd0184003c2fee2.scope: Deactivated successfully.
Jan 21 23:26:19 compute-0 podman[89668]: 2026-01-21 23:26:19.212865333 +0000 UTC m=+0.759428780 container died 53941add95bd8a2748a96930202f77f7f6e48b08377e1917ecd0184003c2fee2 (image=quay.io/ceph/ceph:v18, name=kind_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f3a003a528bed34b4835a3296828077d657d992d85a2e294b37b0ecb36795da-merged.mount: Deactivated successfully.
Jan 21 23:26:19 compute-0 podman[89668]: 2026-01-21 23:26:19.264695498 +0000 UTC m=+0.811258925 container remove 53941add95bd8a2748a96930202f77f7f6e48b08377e1917ecd0184003c2fee2 (image=quay.io/ceph/ceph:v18, name=kind_hugle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:26:19 compute-0 systemd[1]: libpod-conmon-53941add95bd8a2748a96930202f77f7f6e48b08377e1917ecd0184003c2fee2.scope: Deactivated successfully.
Jan 21 23:26:19 compute-0 sudo[89640]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:19 compute-0 sudo[89885]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwndddgsphgheotdkqzjjinkrexqfftp ; /usr/bin/python3'
Jan 21 23:26:19 compute-0 sudo[89885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:19 compute-0 python3[89889]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:26:19 compute-0 sudo[89806]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:19 compute-0 podman[89904]: 2026-01-21 23:26:19.636541442 +0000 UTC m=+0.045134276 container create a5737248f38333c1483e3310626ebe1a822a99c4c25d909c0cc0a9eb1c2aa7f8 (image=quay.io/ceph/ceph:v18, name=awesome_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 23:26:19 compute-0 systemd[1]: Started libpod-conmon-a5737248f38333c1483e3310626ebe1a822a99c4c25d909c0cc0a9eb1c2aa7f8.scope.
Jan 21 23:26:19 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f1a7671b92128d79bd09a3c756ae4be7b0703d8a6c276c2a1900f679cfadf3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f1a7671b92128d79bd09a3c756ae4be7b0703d8a6c276c2a1900f679cfadf3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f1a7671b92128d79bd09a3c756ae4be7b0703d8a6c276c2a1900f679cfadf3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:19 compute-0 podman[89904]: 2026-01-21 23:26:19.617239352 +0000 UTC m=+0.025832206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:19 compute-0 podman[89904]: 2026-01-21 23:26:19.730956386 +0000 UTC m=+0.139549320 container init a5737248f38333c1483e3310626ebe1a822a99c4c25d909c0cc0a9eb1c2aa7f8 (image=quay.io/ceph/ceph:v18, name=awesome_ptolemy, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:26:19 compute-0 podman[89904]: 2026-01-21 23:26:19.738505203 +0000 UTC m=+0.147098087 container start a5737248f38333c1483e3310626ebe1a822a99c4c25d909c0cc0a9eb1c2aa7f8 (image=quay.io/ceph/ceph:v18, name=awesome_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:19 compute-0 podman[89904]: 2026-01-21 23:26:19.742021939 +0000 UTC m=+0.150614823 container attach a5737248f38333c1483e3310626ebe1a822a99c4c25d909c0cc0a9eb1c2aa7f8 (image=quay.io/ceph/ceph:v18, name=awesome_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:26:20 compute-0 ceph-mon[74318]: pgmap v108: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:20 compute-0 ceph-mon[74318]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 21 23:26:20 compute-0 ceph-mon[74318]: osdmap e36: 3 total, 2 up, 3 in
Jan 21 23:26:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:20 compute-0 ceph-mon[74318]: from='osd.2 [v2:192.168.122.102:6800/3484655089,v1:192.168.122.102:6801/3484655089]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 21 23:26:20 compute-0 ceph-mon[74318]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 21 23:26:20 compute-0 ceph-mon[74318]: from='client.14286 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:26:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 21 23:26:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 21 23:26:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 21 23:26:20 compute-0 ceph-mon[74318]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 21 23:26:20 compute-0 ceph-mon[74318]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 21 23:26:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 21 23:26:20 compute-0 ceph-mon[74318]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 21 23:26:20 compute-0 ceph-mon[74318]: osdmap e37: 3 total, 2 up, 3 in
Jan 21 23:26:20 compute-0 ceph-mon[74318]: fsmap cephfs:0
Jan 21 23:26:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:20 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 21 23:26:20 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3484655089; not ready for session (expect reconnect)
Jan 21 23:26:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 21 23:26:20 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:20 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 23:26:20 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 21 23:26:20 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14292 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:26:20 compute-0 ceph-mgr[74614]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 21 23:26:20 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 21 23:26:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 21 23:26:20 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:20 compute-0 awesome_ptolemy[89920]: Scheduled mds.cephfs update...
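As with the rgw spec earlier, the applied /tmp/ceph_mds.yml is not reproduced in the log; given the logged placement (compute-0;compute-1;compute-2), a minimal sketch would be:

    # Hypothetical reconstruction of /tmp/ceph_mds.yml; only the service type
    # and placement are confirmed by the log.
    cat > /tmp/ceph_mds.yml <<'EOF'
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    EOF
    ceph orch apply -i /tmp/ceph_mds.yml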
Jan 21 23:26:20 compute-0 systemd[1]: libpod-a5737248f38333c1483e3310626ebe1a822a99c4c25d909c0cc0a9eb1c2aa7f8.scope: Deactivated successfully.
Jan 21 23:26:20 compute-0 podman[89904]: 2026-01-21 23:26:20.345381712 +0000 UTC m=+0.753974596 container died a5737248f38333c1483e3310626ebe1a822a99c4c25d909c0cc0a9eb1c2aa7f8 (image=quay.io/ceph/ceph:v18, name=awesome_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:26:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7f1a7671b92128d79bd09a3c756ae4be7b0703d8a6c276c2a1900f679cfadf3-merged.mount: Deactivated successfully.
Jan 21 23:26:20 compute-0 podman[89904]: 2026-01-21 23:26:20.409370573 +0000 UTC m=+0.817963427 container remove a5737248f38333c1483e3310626ebe1a822a99c4c25d909c0cc0a9eb1c2aa7f8 (image=quay.io/ceph/ceph:v18, name=awesome_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:26:20 compute-0 systemd[1]: libpod-conmon-a5737248f38333c1483e3310626ebe1a822a99c4c25d909c0cc0a9eb1c2aa7f8.scope: Deactivated successfully.
Jan 21 23:26:20 compute-0 sudo[89885]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v111: 100 pgs: 100 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:26:20 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:26:20 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:26:20 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:26:20 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.ihmngr started
Jan 21 23:26:20 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mgr.compute-1.ihmngr 192.168.122.101:0/3652704244; not ready for session (expect reconnect)
Jan 21 23:26:20 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:21 compute-0 ceph-mon[74318]: purged_snaps scrub starts
Jan 21 23:26:21 compute-0 ceph-mon[74318]: purged_snaps scrub ok
Jan 21 23:26:21 compute-0 ceph-mon[74318]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 21 23:26:21 compute-0 ceph-mon[74318]: 2.1a scrub starts
Jan 21 23:26:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:21 compute-0 ceph-mon[74318]: 2.1a scrub ok
Jan 21 23:26:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:21 compute-0 ceph-mon[74318]: Standby manager daemon compute-1.ihmngr started
Jan 21 23:26:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:21 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3484655089; not ready for session (expect reconnect)
Jan 21 23:26:21 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 21 23:26:21 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:21 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 23:26:21 compute-0 sudo[90034]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkufhnqolayodoutvwhuhcddjfxnvkel ; /usr/bin/python3'
Jan 21 23:26:21 compute-0 sudo[90034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:21 compute-0 python3[90036]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 23:26:21 compute-0 sudo[90034]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:21 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from mgr.compute-1.ihmngr 192.168.122.101:0/3652704244; not ready for session (expect reconnect)
Jan 21 23:26:21 compute-0 sudo[90107]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgzciqqyxwduuevfagocwjqtxotnglnn ; /usr/bin/python3'
Jan 21 23:26:21 compute-0 sudo[90107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:21 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.boqcsl(active, since 2m), standbys: compute-2.uvjsro, compute-1.ihmngr
Jan 21 23:26:21 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.ihmngr", "id": "compute-1.ihmngr"} v 0) v1
Jan 21 23:26:21 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mgr metadata", "who": "compute-1.ihmngr", "id": "compute-1.ihmngr"}]: dispatch
Jan 21 23:26:21 compute-0 python3[90109]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769037981.2682626-37483-263438393278633/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=f25b484d050c82fa53bbf5f0ee2ad75e8c75c1da backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:26:21 compute-0 sudo[90107]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:26:22 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3484655089; not ready for session (expect reconnect)
Jan 21 23:26:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 21 23:26:22 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:22 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 23:26:22 compute-0 ceph-mon[74318]: from='client.14292 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 23:26:22 compute-0 ceph-mon[74318]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 21 23:26:22 compute-0 ceph-mon[74318]: pgmap v111: 100 pgs: 100 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:22 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:22 compute-0 ceph-mon[74318]: 3.12 scrub starts
Jan 21 23:26:22 compute-0 ceph-mon[74318]: 3.12 scrub ok
Jan 21 23:26:22 compute-0 ceph-mon[74318]: mgrmap e10: compute-0.boqcsl(active, since 2m), standbys: compute-2.uvjsro, compute-1.ihmngr
Jan 21 23:26:22 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mgr metadata", "who": "compute-1.ihmngr", "id": "compute-1.ihmngr"}]: dispatch
Jan 21 23:26:22 compute-0 sudo[90157]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvuymqicczzbuqzjfffvghnrsjbsknbf ; /usr/bin/python3'
Jan 21 23:26:22 compute-0 sudo[90157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v112: 100 pgs: 100 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:22 compute-0 python3[90159]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:26:22 compute-0 podman[90160]: 2026-01-21 23:26:22.610061911 +0000 UTC m=+0.049934530 container create 21a5ea8d8b9aa75c88f2db1e11eb521c82f6311f879f0b81ac97b576cfe6f43a (image=quay.io/ceph/ceph:v18, name=hopeful_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:26:22 compute-0 systemd[1]: Started libpod-conmon-21a5ea8d8b9aa75c88f2db1e11eb521c82f6311f879f0b81ac97b576cfe6f43a.scope.
Jan 21 23:26:22 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c0a25d343d9f2b39abdd448c3a75453e0f54442c934af35e5b823ada49c2f3b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c0a25d343d9f2b39abdd448c3a75453e0f54442c934af35e5b823ada49c2f3b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:22 compute-0 podman[90160]: 2026-01-21 23:26:22.589323788 +0000 UTC m=+0.029196427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:22 compute-0 podman[90160]: 2026-01-21 23:26:22.736827897 +0000 UTC m=+0.176700576 container init 21a5ea8d8b9aa75c88f2db1e11eb521c82f6311f879f0b81ac97b576cfe6f43a (image=quay.io/ceph/ceph:v18, name=hopeful_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 21 23:26:22 compute-0 podman[90160]: 2026-01-21 23:26:22.745518798 +0000 UTC m=+0.185391457 container start 21a5ea8d8b9aa75c88f2db1e11eb521c82f6311f879f0b81ac97b576cfe6f43a (image=quay.io/ceph/ceph:v18, name=hopeful_rosalind, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 23:26:22 compute-0 podman[90160]: 2026-01-21 23:26:22.749694873 +0000 UTC m=+0.189567512 container attach 21a5ea8d8b9aa75c88f2db1e11eb521c82f6311f879f0b81ac97b576cfe6f43a (image=quay.io/ceph/ceph:v18, name=hopeful_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:23 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3484655089; not ready for session (expect reconnect)
Jan 21 23:26:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 21 23:26:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:23 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 23:26:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:23 compute-0 ceph-mon[74318]: pgmap v112: 100 pgs: 100 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0) v1
Jan 21 23:26:23 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3093107474' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 21 23:26:23 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3093107474' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 21 23:26:23 compute-0 systemd[1]: libpod-21a5ea8d8b9aa75c88f2db1e11eb521c82f6311f879f0b81ac97b576cfe6f43a.scope: Deactivated successfully.
Jan 21 23:26:23 compute-0 conmon[90175]: conmon 21a5ea8d8b9aa75c88f2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21a5ea8d8b9aa75c88f2db1e11eb521c82f6311f879f0b81ac97b576cfe6f43a.scope/container/memory.events
Jan 21 23:26:23 compute-0 podman[90200]: 2026-01-21 23:26:23.695083506 +0000 UTC m=+0.040269391 container died 21a5ea8d8b9aa75c88f2db1e11eb521c82f6311f879f0b81ac97b576cfe6f43a (image=quay.io/ceph/ceph:v18, name=hopeful_rosalind, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Jan 21 23:26:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c0a25d343d9f2b39abdd448c3a75453e0f54442c934af35e5b823ada49c2f3b-merged.mount: Deactivated successfully.
Jan 21 23:26:23 compute-0 podman[90200]: 2026-01-21 23:26:23.840650416 +0000 UTC m=+0.185836301 container remove 21a5ea8d8b9aa75c88f2db1e11eb521c82f6311f879f0b81ac97b576cfe6f43a (image=quay.io/ceph/ceph:v18, name=hopeful_rosalind, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:26:23 compute-0 systemd[1]: libpod-conmon-21a5ea8d8b9aa75c88f2db1e11eb521c82f6311f879f0b81ac97b576cfe6f43a.scope: Deactivated successfully.
Jan 21 23:26:23 compute-0 sudo[90157]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:24 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3484655089; not ready for session (expect reconnect)
Jan 21 23:26:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 21 23:26:24 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:24 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 23:26:24 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 21 23:26:24 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 21 23:26:24 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3093107474' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 21 23:26:24 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3093107474' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 21 23:26:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v113: 100 pgs: 100 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:24 compute-0 sudo[90238]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awepnjmztcrrowzlfpyfwreqpcpdorsk ; /usr/bin/python3'
Jan 21 23:26:24 compute-0 sudo[90238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:24 compute-0 python3[90240]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:26:24 compute-0 podman[90242]: 2026-01-21 23:26:24.673276299 +0000 UTC m=+0.046765598 container create 96e4674019c021bbafe0cdd0c50f8a6441c32f97d04b0eb24b81e17e448dabf2 (image=quay.io/ceph/ceph:v18, name=laughing_agnesi, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:26:24 compute-0 systemd[1]: Started libpod-conmon-96e4674019c021bbafe0cdd0c50f8a6441c32f97d04b0eb24b81e17e448dabf2.scope.
Jan 21 23:26:24 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48deef7b0c8f7729982f1ec9e7f06505443410a485bb960326eba12b25c309bd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48deef7b0c8f7729982f1ec9e7f06505443410a485bb960326eba12b25c309bd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:24 compute-0 podman[90242]: 2026-01-21 23:26:24.742583232 +0000 UTC m=+0.116072551 container init 96e4674019c021bbafe0cdd0c50f8a6441c32f97d04b0eb24b81e17e448dabf2 (image=quay.io/ceph/ceph:v18, name=laughing_agnesi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:26:24 compute-0 podman[90242]: 2026-01-21 23:26:24.650845895 +0000 UTC m=+0.024335204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:24 compute-0 podman[90242]: 2026-01-21 23:26:24.75015202 +0000 UTC m=+0.123641309 container start 96e4674019c021bbafe0cdd0c50f8a6441c32f97d04b0eb24b81e17e448dabf2 (image=quay.io/ceph/ceph:v18, name=laughing_agnesi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 21 23:26:24 compute-0 podman[90242]: 2026-01-21 23:26:24.756026693 +0000 UTC m=+0.129516012 container attach 96e4674019c021bbafe0cdd0c50f8a6441c32f97d04b0eb24b81e17e448dabf2 (image=quay.io/ceph/ceph:v18, name=laughing_agnesi, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 21 23:26:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:26:24 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:26:24 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Jan 21 23:26:24 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 21 23:26:24 compute-0 ceph-mgr[74614]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Jan 21 23:26:24 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Jan 21 23:26:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 21 23:26:24 compute-0 ceph-mgr[74614]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 21 23:26:24 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 21 23:26:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:26:24 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:26:24 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:26:24 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 21 23:26:24 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 21 23:26:24 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 21 23:26:24 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 21 23:26:24 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 21 23:26:24 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 21 23:26:25 compute-0 sudo[90262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:25 compute-0 sudo[90262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:25 compute-0 sudo[90262]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:25 compute-0 sudo[90287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 21 23:26:25 compute-0 sudo[90287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:25 compute-0 sudo[90287]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:25 compute-0 sudo[90331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:25 compute-0 sudo[90331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:25 compute-0 sudo[90331]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:25 compute-0 ceph-mgr[74614]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3484655089; not ready for session (expect reconnect)
Jan 21 23:26:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 21 23:26:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:25 compute-0 ceph-mgr[74614]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 23:26:25 compute-0 sudo[90356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/etc/ceph
Jan 21 23:26:25 compute-0 sudo[90356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:25 compute-0 sudo[90356]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:25 compute-0 sudo[90381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:25 compute-0 sudo[90381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:25 compute-0 sudo[90381]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:25 compute-0 sudo[90406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/etc/ceph/ceph.conf.new
Jan 21 23:26:25 compute-0 sudo[90406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:25 compute-0 sudo[90406]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 21 23:26:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3352172427' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 21 23:26:25 compute-0 laughing_agnesi[90258]: 
Jan 21 23:26:25 compute-0 laughing_agnesi[90258]: {"fsid":"3759241a-7f1c-520d-ba17-879943ee2f00","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":32,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":37,"num_osds":3,"num_up_osds":2,"osd_up_since":1769037918,"num_in_osds":3,"osd_in_since":1769037964,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":100}],"num_pgs":100,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":56127488,"bytes_avail":14967869440,"bytes_total":15023996928},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2026-01-21T23:26:20.476937+0000","services":{"mgr":{"daemons":{"summary":"","compute-2.uvjsro":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 21 23:26:25 compute-0 systemd[1]: libpod-96e4674019c021bbafe0cdd0c50f8a6441c32f97d04b0eb24b81e17e448dabf2.scope: Deactivated successfully.
Jan 21 23:26:25 compute-0 podman[90242]: 2026-01-21 23:26:25.38039978 +0000 UTC m=+0.753889069 container died 96e4674019c021bbafe0cdd0c50f8a6441c32f97d04b0eb24b81e17e448dabf2 (image=quay.io/ceph/ceph:v18, name=laughing_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 23:26:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-48deef7b0c8f7729982f1ec9e7f06505443410a485bb960326eba12b25c309bd-merged.mount: Deactivated successfully.
Jan 21 23:26:25 compute-0 sudo[90431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:25 compute-0 sudo[90431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:25 compute-0 sudo[90431]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:25 compute-0 podman[90242]: 2026-01-21 23:26:25.416988251 +0000 UTC m=+0.790477580 container remove 96e4674019c021bbafe0cdd0c50f8a6441c32f97d04b0eb24b81e17e448dabf2 (image=quay.io/ceph/ceph:v18, name=laughing_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:25 compute-0 systemd[1]: libpod-conmon-96e4674019c021bbafe0cdd0c50f8a6441c32f97d04b0eb24b81e17e448dabf2.scope: Deactivated successfully.
Jan 21 23:26:25 compute-0 sudo[90238]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:25 compute-0 sudo[90468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:26:25 compute-0 sudo[90468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:25 compute-0 sudo[90468]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:25 compute-0 sudo[90493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:25 compute-0 sudo[90493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:25 compute-0 sudo[90493]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:25 compute-0 sudo[90561]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbcbshycendljwdszsspielplwzhjahp ; /usr/bin/python3'
Jan 21 23:26:25 compute-0 sudo[90522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/etc/ceph/ceph.conf.new
Jan 21 23:26:25 compute-0 sudo[90561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:25 compute-0 sudo[90522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:25 compute-0 sudo[90522]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:25 compute-0 sudo[90592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:25 compute-0 sudo[90592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:25 compute-0 sudo[90592]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:25 compute-0 python3[90567]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:26:25 compute-0 sudo[90617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/etc/ceph/ceph.conf.new
Jan 21 23:26:25 compute-0 sudo[90617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:25 compute-0 sudo[90617]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:25 compute-0 podman[90630]: 2026-01-21 23:26:25.887174585 +0000 UTC m=+0.060123164 container create a8eb13fb6587b6bf61a4d264f4f503f5b34c57a6959f6facf245bea7dc6a4c96 (image=quay.io/ceph/ceph:v18, name=tender_yonath, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:26:25 compute-0 systemd[1]: Started libpod-conmon-a8eb13fb6587b6bf61a4d264f4f503f5b34c57a6959f6facf245bea7dc6a4c96.scope.
Jan 21 23:26:25 compute-0 sudo[90655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:25 compute-0 sudo[90655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:25 compute-0 sudo[90655]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:25 compute-0 podman[90630]: 2026-01-21 23:26:25.85812922 +0000 UTC m=+0.031077859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:25 compute-0 ceph-mon[74318]: 3.14 scrub starts
Jan 21 23:26:25 compute-0 ceph-mon[74318]: 3.14 scrub ok
Jan 21 23:26:25 compute-0 ceph-mon[74318]: 3.17 scrub starts
Jan 21 23:26:25 compute-0 ceph-mon[74318]: 3.17 scrub ok
Jan 21 23:26:25 compute-0 ceph-mon[74318]: pgmap v113: 100 pgs: 100 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 21 23:26:25 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:25 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:25 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 21 23:26:25 compute-0 ceph-mon[74318]: Adjusting osd_memory_target on compute-2 to 127.9M
Jan 21 23:26:25 compute-0 ceph-mon[74318]: Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 21 23:26:25 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:25 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:26:25 compute-0 ceph-mon[74318]: Updating compute-0:/etc/ceph/ceph.conf
Jan 21 23:26:25 compute-0 ceph-mon[74318]: Updating compute-1:/etc/ceph/ceph.conf
Jan 21 23:26:25 compute-0 ceph-mon[74318]: Updating compute-2:/etc/ceph/ceph.conf
Jan 21 23:26:25 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3352172427' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 21 23:26:25 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202b60e2d702aea4c43c07768b65ce955271d981922a79f5f0f5352af1f66df7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202b60e2d702aea4c43c07768b65ce955271d981922a79f5f0f5352af1f66df7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 21 23:26:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 21 23:26:25 compute-0 podman[90630]: 2026-01-21 23:26:25.986362357 +0000 UTC m=+0.159310936 container init a8eb13fb6587b6bf61a4d264f4f503f5b34c57a6959f6facf245bea7dc6a4c96 (image=quay.io/ceph/ceph:v18, name=tender_yonath, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 21 23:26:25 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/3484655089,v1:192.168.122.102:6801/3484655089] boot
Jan 21 23:26:25 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 21 23:26:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 21 23:26:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:25 compute-0 podman[90630]: 2026-01-21 23:26:25.997324002 +0000 UTC m=+0.170272591 container start a8eb13fb6587b6bf61a4d264f4f503f5b34c57a6959f6facf245bea7dc6a4c96 (image=quay.io/ceph/ceph:v18, name=tender_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:26:25 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.249938965s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.202201843s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[4.1f( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.255964279s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208236694s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[4.1f( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.255900383s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208236694s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.249841690s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.202201843s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[2.18( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=8.752070427s) [2] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.704589844s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[2.18( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=8.752041817s) [2] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.704589844s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.255342484s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208251953s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.255316734s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208251953s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[4.15( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.254758835s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208282471s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[4.15( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.254736900s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208282471s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[2.12( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=8.748904228s) [2] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.704582214s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[2.12( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=8.748849869s) [2] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.704582214s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.252454758s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208419800s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[2.f( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=8.748481750s) [2] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.704460144s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[4.9( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.252392769s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208404541s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[2.f( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=8.748449326s) [2] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.704460144s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[4.9( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.252358437s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208404541s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[4.8( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.252250671s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208435059s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[4.8( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.252226830s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208435059s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.252222061s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208419800s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.252106667s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208419800s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[2.5( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=8.747559547s) [2] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.704002380s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[4.1( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.252139091s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208587646s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[2.5( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=8.747541428s) [2] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.704002380s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[4.1( empty local-lis/les=30/31 n=0 ec=27/20 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.252093315s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208587646s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.251998901s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208518982s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.251969337s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208518982s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[2.b( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=8.747396469s) [2] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.703994751s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[2.b( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=8.747373581s) [2] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.703994751s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.251922607s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208610535s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[2.1c( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=8.743704796s) [2] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.700416565s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.251903534s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208610535s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[2.1d( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=8.743643761s) [2] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.700393677s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[2.1c( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=8.743677139s) [2] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.700416565s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[2.1d( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=8.743541718s) [2] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.700393677s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=30/31 n=0 ec=25/18 lis/c=30/30 les/c/f=31/31/0 sis=38 pruub=8.252419472s) [2] r=-1 lpr=38 pi=[30,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.208419800s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:26:26 compute-0 podman[90630]: 2026-01-21 23:26:26.007078146 +0000 UTC m=+0.180026715 container attach a8eb13fb6587b6bf61a4d264f4f503f5b34c57a6959f6facf245bea7dc6a4c96 (image=quay.io/ceph/ceph:v18, name=tender_yonath, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:26 compute-0 sudo[90686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/etc/ceph/ceph.conf.new
Jan 21 23:26:26 compute-0 sudo[90686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:26 compute-0 sudo[90686]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:26 compute-0 sudo[90712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:26 compute-0 sudo[90712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:26 compute-0 sudo[90712]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:26 compute-0 sudo[90737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 21 23:26:26 compute-0 sudo[90737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:26 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:26:26 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:26:26 compute-0 sudo[90737]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:26 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:26:26 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:26:26 compute-0 sudo[90762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:26 compute-0 sudo[90762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:26 compute-0 sudo[90762]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:26 compute-0 sudo[90787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config
Jan 21 23:26:26 compute-0 sudo[90787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:26 compute-0 sudo[90787]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:26 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:26:26 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:26:26 compute-0 sudo[90812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:26 compute-0 sudo[90812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:26 compute-0 sudo[90812]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:26 compute-0 sudo[90856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config
Jan 21 23:26:26 compute-0 sudo[90856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:26 compute-0 sudo[90856]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v115: 100 pgs: 18 peering, 82 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:26 compute-0 sudo[90881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:26 compute-0 sudo[90881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:26 compute-0 sudo[90881]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:26 compute-0 sudo[90906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf.new
Jan 21 23:26:26 compute-0 sudo[90906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:26 compute-0 sudo[90906]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:26 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 21 23:26:26 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4175265639' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 21 23:26:26 compute-0 tender_yonath[90682]: {"epoch":3,"fsid":"3759241a-7f1c-520d-ba17-879943ee2f00","modified":"2026-01-21T23:25:48.218361Z","created":"2026-01-21T23:22:46.964475Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Jan 21 23:26:26 compute-0 tender_yonath[90682]: dumped monmap epoch 3
Jan 21 23:26:26 compute-0 sudo[90931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:26 compute-0 sudo[90931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:26 compute-0 sudo[90931]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:26 compute-0 systemd[1]: libpod-a8eb13fb6587b6bf61a4d264f4f503f5b34c57a6959f6facf245bea7dc6a4c96.scope: Deactivated successfully.
Jan 21 23:26:26 compute-0 podman[90630]: 2026-01-21 23:26:26.661143165 +0000 UTC m=+0.834091714 container died a8eb13fb6587b6bf61a4d264f4f503f5b34c57a6959f6facf245bea7dc6a4c96 (image=quay.io/ceph/ceph:v18, name=tender_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 21 23:26:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-202b60e2d702aea4c43c07768b65ce955271d981922a79f5f0f5352af1f66df7-merged.mount: Deactivated successfully.
Jan 21 23:26:26 compute-0 podman[90630]: 2026-01-21 23:26:26.713038345 +0000 UTC m=+0.885986904 container remove a8eb13fb6587b6bf61a4d264f4f503f5b34c57a6959f6facf245bea7dc6a4c96 (image=quay.io/ceph/ceph:v18, name=tender_yonath, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 21 23:26:26 compute-0 systemd[1]: libpod-conmon-a8eb13fb6587b6bf61a4d264f4f503f5b34c57a6959f6facf245bea7dc6a4c96.scope: Deactivated successfully.
Jan 21 23:26:26 compute-0 sudo[90561]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:26 compute-0 sudo[90959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:26:26 compute-0 sudo[90959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:26 compute-0 sudo[90959]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:26 compute-0 sudo[90995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:26 compute-0 sudo[90995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:26 compute-0 sudo[90995]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:26 compute-0 sudo[91020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf.new
Jan 21 23:26:26 compute-0 sudo[91020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:26 compute-0 sudo[91020]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:26 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 21 23:26:26 compute-0 ceph-mon[74318]: OSD bench result of 4553.514024 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 23:26:26 compute-0 ceph-mon[74318]: 3.18 deep-scrub starts
Jan 21 23:26:26 compute-0 ceph-mon[74318]: 3.18 deep-scrub ok
Jan 21 23:26:26 compute-0 ceph-mon[74318]: osd.2 [v2:192.168.122.102:6800/3484655089,v1:192.168.122.102:6801/3484655089] boot
Jan 21 23:26:26 compute-0 ceph-mon[74318]: osdmap e38: 3 total, 3 up, 3 in
Jan 21 23:26:26 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 21 23:26:26 compute-0 ceph-mon[74318]: Updating compute-2:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:26:26 compute-0 ceph-mon[74318]: Updating compute-0:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:26:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/4175265639' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 21 23:26:27 compute-0 sudo[91068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:27 compute-0 sudo[91068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:27 compute-0 sudo[91068]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 21 23:26:27 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 21 23:26:27 compute-0 sudo[91093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf.new
Jan 21 23:26:27 compute-0 sudo[91093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:27 compute-0 sudo[91093]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:27 compute-0 sudo[91118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:27 compute-0 sudo[91118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:27 compute-0 sudo[91118]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:27 compute-0 sudo[91143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf.new
Jan 21 23:26:27 compute-0 sudo[91143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:27 compute-0 sudo[91143]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:26:27 compute-0 sudo[91192]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmwotxkulmdmadakncoifuptfjtzmtoz ; /usr/bin/python3'
Jan 21 23:26:27 compute-0 sudo[91192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:27 compute-0 sudo[91191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:27 compute-0 sudo[91191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:27 compute-0 sudo[91191]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:27 compute-0 sudo[91219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3759241a-7f1c-520d-ba17-879943ee2f00/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf.new /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:26:27 compute-0 sudo[91219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:27 compute-0 sudo[91219]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:26:27 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:26:27 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:26:27 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:26:27 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:27 compute-0 python3[91206]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:26:27 compute-0 podman[91244]: 2026-01-21 23:26:27.433154794 +0000 UTC m=+0.055265449 container create 12f42d1b2400289a1787281158a4c0d3e6307c05e1aec5e59fd0a44b378d220d (image=quay.io/ceph/ceph:v18, name=funny_wilson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 21 23:26:27 compute-0 systemd[1]: Started libpod-conmon-12f42d1b2400289a1787281158a4c0d3e6307c05e1aec5e59fd0a44b378d220d.scope.
Jan 21 23:26:27 compute-0 podman[91244]: 2026-01-21 23:26:27.408240226 +0000 UTC m=+0.030350931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:27 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1cd3b29cc99bf68ab864bc674d10d4db36cd995e2318f4dcae026483756931/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1cd3b29cc99bf68ab864bc674d10d4db36cd995e2318f4dcae026483756931/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:27 compute-0 podman[91244]: 2026-01-21 23:26:27.523596337 +0000 UTC m=+0.145707012 container init 12f42d1b2400289a1787281158a4c0d3e6307c05e1aec5e59fd0a44b378d220d (image=quay.io/ceph/ceph:v18, name=funny_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:26:27 compute-0 podman[91244]: 2026-01-21 23:26:27.534038679 +0000 UTC m=+0.156149314 container start 12f42d1b2400289a1787281158a4c0d3e6307c05e1aec5e59fd0a44b378d220d (image=quay.io/ceph/ceph:v18, name=funny_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 23:26:27 compute-0 podman[91244]: 2026-01-21 23:26:27.537695544 +0000 UTC m=+0.159806179 container attach 12f42d1b2400289a1787281158a4c0d3e6307c05e1aec5e59fd0a44b378d220d (image=quay.io/ceph/ceph:v18, name=funny_wilson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:26:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:26:27 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:26:27 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:26:27 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:27 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 4579aff3-a07b-485f-88a1-9a1a4190cdd7 does not exist
Jan 21 23:26:27 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 4e46282c-44d7-4cd0-ace8-d4c061b64387 does not exist
Jan 21 23:26:27 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 26d4ab72-e2eb-4ae9-8126-935152d43632 does not exist
Jan 21 23:26:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:26:27 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:26:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:26:27 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:26:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:26:27 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:27 compute-0 sudo[91264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:27 compute-0 sudo[91264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:27 compute-0 sudo[91264]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:27 compute-0 sudo[91289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:26:27 compute-0 sudo[91289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:27 compute-0 sudo[91289]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:27 compute-0 sudo[91324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:27 compute-0 sudo[91324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:27 compute-0 sudo[91324]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:27 compute-0 sudo[91358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:26:27 compute-0 sudo[91358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:28 compute-0 ceph-mon[74318]: Updating compute-1:/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/config/ceph.conf
Jan 21 23:26:28 compute-0 ceph-mon[74318]: pgmap v115: 100 pgs: 18 peering, 82 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:28 compute-0 ceph-mon[74318]: osdmap e39: 3 total, 3 up, 3 in
Jan 21 23:26:28 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:28 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:28 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:28 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:28 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:28 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:28 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:28 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:26:28 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:26:28 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Jan 21 23:26:28 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1948935097' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 21 23:26:28 compute-0 funny_wilson[91260]: [client.openstack]
Jan 21 23:26:28 compute-0 funny_wilson[91260]:         key = AQCpX3FpAAAAABAAo4kgEsfAoeB8cTkM6A+wAA==
Jan 21 23:26:28 compute-0 funny_wilson[91260]:         caps mgr = "allow *"
Jan 21 23:26:28 compute-0 funny_wilson[91260]:         caps mon = "profile rbd"
Jan 21 23:26:28 compute-0 funny_wilson[91260]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 21 23:26:28 compute-0 systemd[1]: libpod-12f42d1b2400289a1787281158a4c0d3e6307c05e1aec5e59fd0a44b378d220d.scope: Deactivated successfully.
Jan 21 23:26:28 compute-0 podman[91244]: 2026-01-21 23:26:28.190186673 +0000 UTC m=+0.812297298 container died 12f42d1b2400289a1787281158a4c0d3e6307c05e1aec5e59fd0a44b378d220d (image=quay.io/ceph/ceph:v18, name=funny_wilson, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 23:26:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e1cd3b29cc99bf68ab864bc674d10d4db36cd995e2318f4dcae026483756931-merged.mount: Deactivated successfully.
Jan 21 23:26:28 compute-0 podman[91244]: 2026-01-21 23:26:28.23504411 +0000 UTC m=+0.857154735 container remove 12f42d1b2400289a1787281158a4c0d3e6307c05e1aec5e59fd0a44b378d220d (image=quay.io/ceph/ceph:v18, name=funny_wilson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:26:28 compute-0 systemd[1]: libpod-conmon-12f42d1b2400289a1787281158a4c0d3e6307c05e1aec5e59fd0a44b378d220d.scope: Deactivated successfully.
Jan 21 23:26:28 compute-0 sudo[91192]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:28 compute-0 podman[91433]: 2026-01-21 23:26:28.348987155 +0000 UTC m=+0.047303272 container create ed290f0a89635897c159125e116654d35512c169360b8bbc6e53a08273a37edf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:28 compute-0 systemd[1]: Started libpod-conmon-ed290f0a89635897c159125e116654d35512c169360b8bbc6e53a08273a37edf.scope.
Jan 21 23:26:28 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:28 compute-0 podman[91433]: 2026-01-21 23:26:28.324872508 +0000 UTC m=+0.023188685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:26:28 compute-0 podman[91433]: 2026-01-21 23:26:28.431472982 +0000 UTC m=+0.129789089 container init ed290f0a89635897c159125e116654d35512c169360b8bbc6e53a08273a37edf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 23:26:28 compute-0 podman[91433]: 2026-01-21 23:26:28.437813736 +0000 UTC m=+0.136129833 container start ed290f0a89635897c159125e116654d35512c169360b8bbc6e53a08273a37edf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:26:28 compute-0 podman[91433]: 2026-01-21 23:26:28.441180734 +0000 UTC m=+0.139496841 container attach ed290f0a89635897c159125e116654d35512c169360b8bbc6e53a08273a37edf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:26:28 compute-0 ecstatic_galois[91449]: 167 167
Jan 21 23:26:28 compute-0 systemd[1]: libpod-ed290f0a89635897c159125e116654d35512c169360b8bbc6e53a08273a37edf.scope: Deactivated successfully.
Jan 21 23:26:28 compute-0 podman[91433]: 2026-01-21 23:26:28.443143535 +0000 UTC m=+0.141459672 container died ed290f0a89635897c159125e116654d35512c169360b8bbc6e53a08273a37edf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:26:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbd0a2019c917818aac910235e5f5324433350333651093f4b459dd535794401-merged.mount: Deactivated successfully.
Jan 21 23:26:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v117: 100 pgs: 18 peering, 82 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:28 compute-0 podman[91433]: 2026-01-21 23:26:28.481199875 +0000 UTC m=+0.179515972 container remove ed290f0a89635897c159125e116654d35512c169360b8bbc6e53a08273a37edf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:26:28 compute-0 systemd[1]: libpod-conmon-ed290f0a89635897c159125e116654d35512c169360b8bbc6e53a08273a37edf.scope: Deactivated successfully.
Jan 21 23:26:28 compute-0 podman[91473]: 2026-01-21 23:26:28.681267871 +0000 UTC m=+0.057057065 container create 4f9cc0c6d24e49cc2f09a27f16d3568b18dff056cb7766ea2bcbf32719c6e9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclean, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 21 23:26:28 compute-0 systemd[1]: Started libpod-conmon-4f9cc0c6d24e49cc2f09a27f16d3568b18dff056cb7766ea2bcbf32719c6e9d1.scope.
Jan 21 23:26:28 compute-0 podman[91473]: 2026-01-21 23:26:28.651950738 +0000 UTC m=+0.027739982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:26:28 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a7dc1b481f3d73ea8c5f5f42fbea2a1448346f76feaf8e5d55e4c373ed4f9c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a7dc1b481f3d73ea8c5f5f42fbea2a1448346f76feaf8e5d55e4c373ed4f9c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a7dc1b481f3d73ea8c5f5f42fbea2a1448346f76feaf8e5d55e4c373ed4f9c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a7dc1b481f3d73ea8c5f5f42fbea2a1448346f76feaf8e5d55e4c373ed4f9c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a7dc1b481f3d73ea8c5f5f42fbea2a1448346f76feaf8e5d55e4c373ed4f9c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:28 compute-0 podman[91473]: 2026-01-21 23:26:28.800966686 +0000 UTC m=+0.176755860 container init 4f9cc0c6d24e49cc2f09a27f16d3568b18dff056cb7766ea2bcbf32719c6e9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 23:26:28 compute-0 podman[91473]: 2026-01-21 23:26:28.818282737 +0000 UTC m=+0.194071901 container start 4f9cc0c6d24e49cc2f09a27f16d3568b18dff056cb7766ea2bcbf32719c6e9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:26:28 compute-0 podman[91473]: 2026-01-21 23:26:28.822542007 +0000 UTC m=+0.198331161 container attach 4f9cc0c6d24e49cc2f09a27f16d3568b18dff056cb7766ea2bcbf32719c6e9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:26:29 compute-0 ceph-mon[74318]: 3.19 scrub starts
Jan 21 23:26:29 compute-0 ceph-mon[74318]: 3.19 scrub ok
Jan 21 23:26:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1948935097' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 21 23:26:29 compute-0 ceph-mon[74318]: 3.1e scrub starts
Jan 21 23:26:29 compute-0 ceph-mon[74318]: 3.1e scrub ok
Jan 21 23:26:29 compute-0 sudo[91648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohflimcftycelclsfowbqqjilwdgqjsv ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769037989.2957194-37555-153382790168444/async_wrapper.py j791402235648 30 /home/zuul/.ansible/tmp/ansible-tmp-1769037989.2957194-37555-153382790168444/AnsiballZ_command.py _'
Jan 21 23:26:29 compute-0 sudo[91648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:29 compute-0 sleepy_mclean[91489]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:26:29 compute-0 sleepy_mclean[91489]: --> relative data size: 1.0
Jan 21 23:26:29 compute-0 sleepy_mclean[91489]: --> All data devices are unavailable
Jan 21 23:26:29 compute-0 systemd[1]: libpod-4f9cc0c6d24e49cc2f09a27f16d3568b18dff056cb7766ea2bcbf32719c6e9d1.scope: Deactivated successfully.
Jan 21 23:26:29 compute-0 podman[91473]: 2026-01-21 23:26:29.779852928 +0000 UTC m=+1.155642162 container died 4f9cc0c6d24e49cc2f09a27f16d3568b18dff056cb7766ea2bcbf32719c6e9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclean, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 21 23:26:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a7dc1b481f3d73ea8c5f5f42fbea2a1448346f76feaf8e5d55e4c373ed4f9c6-merged.mount: Deactivated successfully.
Jan 21 23:26:29 compute-0 podman[91473]: 2026-01-21 23:26:29.849535231 +0000 UTC m=+1.225324385 container remove 4f9cc0c6d24e49cc2f09a27f16d3568b18dff056cb7766ea2bcbf32719c6e9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 21 23:26:29 compute-0 systemd[1]: libpod-conmon-4f9cc0c6d24e49cc2f09a27f16d3568b18dff056cb7766ea2bcbf32719c6e9d1.scope: Deactivated successfully.
Jan 21 23:26:29 compute-0 ansible-async_wrapper.py[91652]: Invoked with j791402235648 30 /home/zuul/.ansible/tmp/ansible-tmp-1769037989.2957194-37555-153382790168444/AnsiballZ_command.py _
Jan 21 23:26:29 compute-0 ansible-async_wrapper.py[91670]: Starting module and watcher
Jan 21 23:26:29 compute-0 ansible-async_wrapper.py[91670]: Start watching 91671 (30)
Jan 21 23:26:29 compute-0 ansible-async_wrapper.py[91671]: Start module (91671)
Jan 21 23:26:29 compute-0 sudo[91358]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:29 compute-0 ansible-async_wrapper.py[91652]: Return async_wrapper task started.
Jan 21 23:26:29 compute-0 sudo[91648]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:29 compute-0 sudo[91673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:29 compute-0 sudo[91673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:29 compute-0 sudo[91673]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:29 compute-0 sudo[91698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:26:29 compute-0 sudo[91698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:30 compute-0 sudo[91698]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:30 compute-0 python3[91672]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:26:30 compute-0 sudo[91723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:30 compute-0 ceph-mon[74318]: pgmap v117: 100 pgs: 18 peering, 82 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:30 compute-0 sudo[91723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:30 compute-0 sudo[91723]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:30 compute-0 podman[91736]: 2026-01-21 23:26:30.083871208 +0000 UTC m=+0.045405442 container create 0ba2c536ae707b326bb121daddcdd3ec9852fe12bce88ca7193384d9c53079c9 (image=quay.io/ceph/ceph:v18, name=exciting_booth, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 21 23:26:30 compute-0 systemd[1]: Started libpod-conmon-0ba2c536ae707b326bb121daddcdd3ec9852fe12bce88ca7193384d9c53079c9.scope.
Jan 21 23:26:30 compute-0 sudo[91759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:26:30 compute-0 sudo[91759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:30 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a7375f193d0f674837f330cdf0f8d34de6cdba945a13cc15f5542f9680354bc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a7375f193d0f674837f330cdf0f8d34de6cdba945a13cc15f5542f9680354bc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:30 compute-0 podman[91736]: 2026-01-21 23:26:30.062366759 +0000 UTC m=+0.023901043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:30 compute-0 podman[91736]: 2026-01-21 23:26:30.170480402 +0000 UTC m=+0.132014696 container init 0ba2c536ae707b326bb121daddcdd3ec9852fe12bce88ca7193384d9c53079c9 (image=quay.io/ceph/ceph:v18, name=exciting_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 21 23:26:30 compute-0 podman[91736]: 2026-01-21 23:26:30.190742199 +0000 UTC m=+0.152276443 container start 0ba2c536ae707b326bb121daddcdd3ec9852fe12bce88ca7193384d9c53079c9 (image=quay.io/ceph/ceph:v18, name=exciting_booth, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:26:30 compute-0 podman[91736]: 2026-01-21 23:26:30.211714655 +0000 UTC m=+0.173248919 container attach 0ba2c536ae707b326bb121daddcdd3ec9852fe12bce88ca7193384d9c53079c9 (image=quay.io/ceph/ceph:v18, name=exciting_booth, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:26:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v118: 100 pgs: 18 peering, 82 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:30 compute-0 podman[91833]: 2026-01-21 23:26:30.521145076 +0000 UTC m=+0.071819179 container create af24eaced1cb4d5e916d913152b019b7ce40e8b4adf2f1749ca887314ea31bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hertz, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 21 23:26:30 compute-0 systemd[1]: Started libpod-conmon-af24eaced1cb4d5e916d913152b019b7ce40e8b4adf2f1749ca887314ea31bcd.scope.
Jan 21 23:26:30 compute-0 podman[91833]: 2026-01-21 23:26:30.475286964 +0000 UTC m=+0.025961147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:26:30 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:30 compute-0 podman[91833]: 2026-01-21 23:26:30.685878403 +0000 UTC m=+0.236552546 container init af24eaced1cb4d5e916d913152b019b7ce40e8b4adf2f1749ca887314ea31bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hertz, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 21 23:26:30 compute-0 podman[91833]: 2026-01-21 23:26:30.696373107 +0000 UTC m=+0.247047210 container start af24eaced1cb4d5e916d913152b019b7ce40e8b4adf2f1749ca887314ea31bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hertz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 21 23:26:30 compute-0 goofy_hertz[91868]: 167 167
Jan 21 23:26:30 compute-0 systemd[1]: libpod-af24eaced1cb4d5e916d913152b019b7ce40e8b4adf2f1749ca887314ea31bcd.scope: Deactivated successfully.
Jan 21 23:26:30 compute-0 podman[91833]: 2026-01-21 23:26:30.701793488 +0000 UTC m=+0.252467611 container attach af24eaced1cb4d5e916d913152b019b7ce40e8b4adf2f1749ca887314ea31bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 21 23:26:30 compute-0 podman[91833]: 2026-01-21 23:26:30.702440524 +0000 UTC m=+0.253114627 container died af24eaced1cb4d5e916d913152b019b7ce40e8b4adf2f1749ca887314ea31bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 23:26:30 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14322 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 23:26:30 compute-0 exciting_booth[91787]: 
Jan 21 23:26:30 compute-0 exciting_booth[91787]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 21 23:26:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-abe63473a7a33aaae07c34e8258977410c632483e9f9450753235be64639d84e-merged.mount: Deactivated successfully.
Jan 21 23:26:30 compute-0 systemd[1]: libpod-0ba2c536ae707b326bb121daddcdd3ec9852fe12bce88ca7193384d9c53079c9.scope: Deactivated successfully.
Jan 21 23:26:30 compute-0 conmon[91787]: conmon 0ba2c536ae707b326bb1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ba2c536ae707b326bb121daddcdd3ec9852fe12bce88ca7193384d9c53079c9.scope/container/memory.events
Jan 21 23:26:30 compute-0 podman[91833]: 2026-01-21 23:26:30.802482988 +0000 UTC m=+0.353157091 container remove af24eaced1cb4d5e916d913152b019b7ce40e8b4adf2f1749ca887314ea31bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:26:30 compute-0 podman[91736]: 2026-01-21 23:26:30.80683113 +0000 UTC m=+0.768365364 container died 0ba2c536ae707b326bb121daddcdd3ec9852fe12bce88ca7193384d9c53079c9 (image=quay.io/ceph/ceph:v18, name=exciting_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 23:26:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a7375f193d0f674837f330cdf0f8d34de6cdba945a13cc15f5542f9680354bc-merged.mount: Deactivated successfully.
Jan 21 23:26:30 compute-0 podman[91736]: 2026-01-21 23:26:30.866656047 +0000 UTC m=+0.828190281 container remove 0ba2c536ae707b326bb121daddcdd3ec9852fe12bce88ca7193384d9c53079c9 (image=quay.io/ceph/ceph:v18, name=exciting_booth, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 21 23:26:30 compute-0 systemd[1]: libpod-conmon-0ba2c536ae707b326bb121daddcdd3ec9852fe12bce88ca7193384d9c53079c9.scope: Deactivated successfully.
Jan 21 23:26:30 compute-0 systemd[1]: libpod-conmon-af24eaced1cb4d5e916d913152b019b7ce40e8b4adf2f1749ca887314ea31bcd.scope: Deactivated successfully.
Jan 21 23:26:30 compute-0 ansible-async_wrapper.py[91671]: Module complete (91671)
Jan 21 23:26:31 compute-0 podman[91920]: 2026-01-21 23:26:31.047541264 +0000 UTC m=+0.081874312 container create 0b92494d85a7e770ebdb76d9045604e4b89b73c4a90c91260444e250ae3b3513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 21 23:26:31 compute-0 sudo[91968]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsyfsgijfzfnjmazytlorzddjlvpmdis ; /usr/bin/python3'
Jan 21 23:26:31 compute-0 sudo[91968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:31 compute-0 podman[91920]: 2026-01-21 23:26:30.990417248 +0000 UTC m=+0.024750306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:26:31 compute-0 systemd[1]: Started libpod-conmon-0b92494d85a7e770ebdb76d9045604e4b89b73c4a90c91260444e250ae3b3513.scope.
Jan 21 23:26:31 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6ae27d9261578418560dc755e10b7084557002c4575c816d4bc6ec89e5bbf8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6ae27d9261578418560dc755e10b7084557002c4575c816d4bc6ec89e5bbf8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6ae27d9261578418560dc755e10b7084557002c4575c816d4bc6ec89e5bbf8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6ae27d9261578418560dc755e10b7084557002c4575c816d4bc6ec89e5bbf8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:31 compute-0 podman[91920]: 2026-01-21 23:26:31.145708308 +0000 UTC m=+0.180041356 container init 0b92494d85a7e770ebdb76d9045604e4b89b73c4a90c91260444e250ae3b3513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 21 23:26:31 compute-0 podman[91920]: 2026-01-21 23:26:31.15498452 +0000 UTC m=+0.189317588 container start 0b92494d85a7e770ebdb76d9045604e4b89b73c4a90c91260444e250ae3b3513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 21 23:26:31 compute-0 podman[91920]: 2026-01-21 23:26:31.160844313 +0000 UTC m=+0.195177371 container attach 0b92494d85a7e770ebdb76d9045604e4b89b73c4a90c91260444e250ae3b3513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 23:26:31 compute-0 python3[91972]: ansible-ansible.legacy.async_status Invoked with jid=j791402235648.91652 mode=status _async_dir=/root/.ansible_async
Jan 21 23:26:31 compute-0 sudo[91968]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:31 compute-0 sudo[92024]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwuwjqcfglwboyseaadcwdopsjwweean ; /usr/bin/python3'
Jan 21 23:26:31 compute-0 sudo[92024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:31 compute-0 python3[92026]: ansible-ansible.legacy.async_status Invoked with jid=j791402235648.91652 mode=cleanup _async_dir=/root/.ansible_async
Jan 21 23:26:31 compute-0 sudo[92024]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]: {
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:     "1": [
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:         {
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:             "devices": [
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:                 "/dev/loop3"
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:             ],
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:             "lv_name": "ceph_lv0",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:             "lv_size": "7511998464",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:             "name": "ceph_lv0",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:             "tags": {
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:                 "ceph.cluster_name": "ceph",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:                 "ceph.crush_device_class": "",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:                 "ceph.encrypted": "0",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:                 "ceph.osd_id": "1",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:                 "ceph.type": "block",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:                 "ceph.vdo": "0"
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:             },
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:             "type": "block",
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:             "vg_name": "ceph_vg0"
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:         }
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]:     ]
Jan 21 23:26:31 compute-0 pensive_kapitsa[91973]: }
Jan 21 23:26:31 compute-0 systemd[1]: libpod-0b92494d85a7e770ebdb76d9045604e4b89b73c4a90c91260444e250ae3b3513.scope: Deactivated successfully.
Jan 21 23:26:31 compute-0 podman[91920]: 2026-01-21 23:26:31.964676899 +0000 UTC m=+0.999009997 container died 0b92494d85a7e770ebdb76d9045604e4b89b73c4a90c91260444e250ae3b3513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 21 23:26:31 compute-0 sudo[92055]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etzgdkysstwiagfpsusgxtkourutgqvx ; /usr/bin/python3'
Jan 21 23:26:31 compute-0 sudo[92055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef6ae27d9261578418560dc755e10b7084557002c4575c816d4bc6ec89e5bbf8-merged.mount: Deactivated successfully.
Jan 21 23:26:32 compute-0 ceph-mon[74318]: pgmap v118: 100 pgs: 18 peering, 82 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:32 compute-0 ceph-mon[74318]: from='client.14322 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 23:26:32 compute-0 podman[91920]: 2026-01-21 23:26:32.088182652 +0000 UTC m=+1.122515690 container remove 0b92494d85a7e770ebdb76d9045604e4b89b73c4a90c91260444e250ae3b3513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 21 23:26:32 compute-0 systemd[1]: libpod-conmon-0b92494d85a7e770ebdb76d9045604e4b89b73c4a90c91260444e250ae3b3513.scope: Deactivated successfully.
Jan 21 23:26:32 compute-0 sudo[91759]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:32 compute-0 python3[92067]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:26:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:26:32 compute-0 sudo[92069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:32 compute-0 sudo[92069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:32 compute-0 sudo[92069]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:32 compute-0 podman[92083]: 2026-01-21 23:26:32.254909581 +0000 UTC m=+0.057384245 container create 313621a12493784cde74681409b94c52a31d9ce7a0df71bda53cdaf3a3eef3cd (image=quay.io/ceph/ceph:v18, name=musing_herschel, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 23:26:32 compute-0 systemd[1]: Started libpod-conmon-313621a12493784cde74681409b94c52a31d9ce7a0df71bda53cdaf3a3eef3cd.scope.
Jan 21 23:26:32 compute-0 sudo[92104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:26:32 compute-0 sudo[92104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:32 compute-0 sudo[92104]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:32 compute-0 podman[92083]: 2026-01-21 23:26:32.236781749 +0000 UTC m=+0.039256453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:32 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0a182b58e82d84d8472a370881edceff3e2831bb2ffe213d953c3913155fd5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0a182b58e82d84d8472a370881edceff3e2831bb2ffe213d953c3913155fd5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:32 compute-0 podman[92083]: 2026-01-21 23:26:32.357371477 +0000 UTC m=+0.159846171 container init 313621a12493784cde74681409b94c52a31d9ce7a0df71bda53cdaf3a3eef3cd (image=quay.io/ceph/ceph:v18, name=musing_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 21 23:26:32 compute-0 podman[92083]: 2026-01-21 23:26:32.369939064 +0000 UTC m=+0.172413728 container start 313621a12493784cde74681409b94c52a31d9ce7a0df71bda53cdaf3a3eef3cd (image=quay.io/ceph/ceph:v18, name=musing_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 21 23:26:32 compute-0 podman[92083]: 2026-01-21 23:26:32.372978294 +0000 UTC m=+0.175452988 container attach 313621a12493784cde74681409b94c52a31d9ce7a0df71bda53cdaf3a3eef3cd (image=quay.io/ceph/ceph:v18, name=musing_herschel, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 21 23:26:32 compute-0 sudo[92137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:32 compute-0 sudo[92137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:32 compute-0 sudo[92137]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:32 compute-0 sudo[92163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:26:32 compute-0 sudo[92163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v119: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:32 compute-0 podman[92246]: 2026-01-21 23:26:32.778973658 +0000 UTC m=+0.040507275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:26:32 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14328 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 23:26:32 compute-0 musing_herschel[92133]: 
Jan 21 23:26:32 compute-0 musing_herschel[92133]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 21 23:26:32 compute-0 systemd[1]: libpod-313621a12493784cde74681409b94c52a31d9ce7a0df71bda53cdaf3a3eef3cd.scope: Deactivated successfully.
Jan 21 23:26:33 compute-0 podman[92246]: 2026-01-21 23:26:33.078238915 +0000 UTC m=+0.339772522 container create ae8691f4cdbd326e024b19186e290666aa12c2159611961d017309802388e7c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pascal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 21 23:26:33 compute-0 ceph-mon[74318]: 2.18 deep-scrub starts
Jan 21 23:26:33 compute-0 ceph-mon[74318]: 2.18 deep-scrub ok
Jan 21 23:26:33 compute-0 ceph-mon[74318]: 3.1f scrub starts
Jan 21 23:26:33 compute-0 ceph-mon[74318]: 3.1f scrub ok
Jan 21 23:26:33 compute-0 systemd[1]: Started libpod-conmon-ae8691f4cdbd326e024b19186e290666aa12c2159611961d017309802388e7c0.scope.
Jan 21 23:26:33 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:33 compute-0 podman[92246]: 2026-01-21 23:26:33.27018478 +0000 UTC m=+0.531718397 container init ae8691f4cdbd326e024b19186e290666aa12c2159611961d017309802388e7c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:26:33 compute-0 podman[92246]: 2026-01-21 23:26:33.277963483 +0000 UTC m=+0.539497100 container start ae8691f4cdbd326e024b19186e290666aa12c2159611961d017309802388e7c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pascal, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:26:33 compute-0 vibrant_pascal[92276]: 167 167
Jan 21 23:26:33 compute-0 systemd[1]: libpod-ae8691f4cdbd326e024b19186e290666aa12c2159611961d017309802388e7c0.scope: Deactivated successfully.
Jan 21 23:26:33 compute-0 podman[92246]: 2026-01-21 23:26:33.284586075 +0000 UTC m=+0.546119692 container attach ae8691f4cdbd326e024b19186e290666aa12c2159611961d017309802388e7c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 23:26:33 compute-0 podman[92246]: 2026-01-21 23:26:33.286359201 +0000 UTC m=+0.547892798 container died ae8691f4cdbd326e024b19186e290666aa12c2159611961d017309802388e7c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 23:26:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d7d3bdfb6b234fb00d7d9f648481393bff5c16a3d862cc1900c505128e4c33f-merged.mount: Deactivated successfully.
Jan 21 23:26:33 compute-0 podman[92246]: 2026-01-21 23:26:33.33169878 +0000 UTC m=+0.593232367 container remove ae8691f4cdbd326e024b19186e290666aa12c2159611961d017309802388e7c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pascal, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 21 23:26:33 compute-0 systemd[1]: libpod-conmon-ae8691f4cdbd326e024b19186e290666aa12c2159611961d017309802388e7c0.scope: Deactivated successfully.
Jan 21 23:26:33 compute-0 podman[92083]: 2026-01-21 23:26:33.367068971 +0000 UTC m=+1.169543625 container died 313621a12493784cde74681409b94c52a31d9ce7a0df71bda53cdaf3a3eef3cd (image=quay.io/ceph/ceph:v18, name=musing_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae0a182b58e82d84d8472a370881edceff3e2831bb2ffe213d953c3913155fd5-merged.mount: Deactivated successfully.
Jan 21 23:26:33 compute-0 podman[92083]: 2026-01-21 23:26:33.419705591 +0000 UTC m=+1.222180285 container remove 313621a12493784cde74681409b94c52a31d9ce7a0df71bda53cdaf3a3eef3cd (image=quay.io/ceph/ceph:v18, name=musing_herschel, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 21 23:26:33 compute-0 sudo[92055]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:33 compute-0 systemd[1]: libpod-conmon-313621a12493784cde74681409b94c52a31d9ce7a0df71bda53cdaf3a3eef3cd.scope: Deactivated successfully.
Jan 21 23:26:33 compute-0 podman[92301]: 2026-01-21 23:26:33.565198157 +0000 UTC m=+0.060826674 container create f974096e7f20334529b0e6af5c0cbbd2fb99420b8b34e9dbbc7373a6a4f5689c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:26:33 compute-0 systemd[1]: Started libpod-conmon-f974096e7f20334529b0e6af5c0cbbd2fb99420b8b34e9dbbc7373a6a4f5689c.scope.
Jan 21 23:26:33 compute-0 podman[92301]: 2026-01-21 23:26:33.541781237 +0000 UTC m=+0.037409824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:26:33 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c84d2a99e8f8ea455a7c9a0a6c3c9036048ee5ba4ff1c14a31678afee6e71aa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c84d2a99e8f8ea455a7c9a0a6c3c9036048ee5ba4ff1c14a31678afee6e71aa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c84d2a99e8f8ea455a7c9a0a6c3c9036048ee5ba4ff1c14a31678afee6e71aa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c84d2a99e8f8ea455a7c9a0a6c3c9036048ee5ba4ff1c14a31678afee6e71aa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:33 compute-0 podman[92301]: 2026-01-21 23:26:33.66486713 +0000 UTC m=+0.160495727 container init f974096e7f20334529b0e6af5c0cbbd2fb99420b8b34e9dbbc7373a6a4f5689c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Jan 21 23:26:33 compute-0 podman[92301]: 2026-01-21 23:26:33.681214285 +0000 UTC m=+0.176842792 container start f974096e7f20334529b0e6af5c0cbbd2fb99420b8b34e9dbbc7373a6a4f5689c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 21 23:26:33 compute-0 podman[92301]: 2026-01-21 23:26:33.685492997 +0000 UTC m=+0.181121504 container attach f974096e7f20334529b0e6af5c0cbbd2fb99420b8b34e9dbbc7373a6a4f5689c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 23:26:34 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 21 23:26:34 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 21 23:26:34 compute-0 sudo[92346]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnhrlyrdecwqfmoduumakwpncgxxlxsp ; /usr/bin/python3'
Jan 21 23:26:34 compute-0 sudo[92346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:34 compute-0 ceph-mon[74318]: pgmap v119: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:34 compute-0 ceph-mon[74318]: from='client.14328 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 23:26:34 compute-0 python3[92348]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:26:34 compute-0 podman[92354]: 2026-01-21 23:26:34.462652749 +0000 UTC m=+0.068676048 container create bd999916c22e1ca98382373ae245b70418587f1dc39988566148387e0b3eade1 (image=quay.io/ceph/ceph:v18, name=condescending_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v120: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:34 compute-0 systemd[1]: Started libpod-conmon-bd999916c22e1ca98382373ae245b70418587f1dc39988566148387e0b3eade1.scope.
Jan 21 23:26:34 compute-0 cranky_villani[92318]: {
Jan 21 23:26:34 compute-0 cranky_villani[92318]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:26:34 compute-0 cranky_villani[92318]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:26:34 compute-0 cranky_villani[92318]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:26:34 compute-0 cranky_villani[92318]:         "osd_id": 1,
Jan 21 23:26:34 compute-0 cranky_villani[92318]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:26:34 compute-0 cranky_villani[92318]:         "type": "bluestore"
Jan 21 23:26:34 compute-0 cranky_villani[92318]:     }
Jan 21 23:26:34 compute-0 cranky_villani[92318]: }
Jan 21 23:26:34 compute-0 podman[92354]: 2026-01-21 23:26:34.43193166 +0000 UTC m=+0.037955009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:34 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f529c37c691eac7e14de890dee07eb57bdfe0cf6e33eaafa09e595b23da48de/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f529c37c691eac7e14de890dee07eb57bdfe0cf6e33eaafa09e595b23da48de/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:34 compute-0 systemd[1]: libpod-f974096e7f20334529b0e6af5c0cbbd2fb99420b8b34e9dbbc7373a6a4f5689c.scope: Deactivated successfully.
Jan 21 23:26:34 compute-0 podman[92301]: 2026-01-21 23:26:34.564308134 +0000 UTC m=+1.059936671 container died f974096e7f20334529b0e6af5c0cbbd2fb99420b8b34e9dbbc7373a6a4f5689c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:26:34 compute-0 podman[92354]: 2026-01-21 23:26:34.564407047 +0000 UTC m=+0.170430356 container init bd999916c22e1ca98382373ae245b70418587f1dc39988566148387e0b3eade1 (image=quay.io/ceph/ceph:v18, name=condescending_hellman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:26:34 compute-0 podman[92354]: 2026-01-21 23:26:34.574851669 +0000 UTC m=+0.180874928 container start bd999916c22e1ca98382373ae245b70418587f1dc39988566148387e0b3eade1 (image=quay.io/ceph/ceph:v18, name=condescending_hellman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:26:34 compute-0 podman[92354]: 2026-01-21 23:26:34.578513214 +0000 UTC m=+0.184536513 container attach bd999916c22e1ca98382373ae245b70418587f1dc39988566148387e0b3eade1 (image=quay.io/ceph/ceph:v18, name=condescending_hellman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 23:26:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c84d2a99e8f8ea455a7c9a0a6c3c9036048ee5ba4ff1c14a31678afee6e71aa2-merged.mount: Deactivated successfully.
Jan 21 23:26:34 compute-0 podman[92301]: 2026-01-21 23:26:34.63333289 +0000 UTC m=+1.128961417 container remove f974096e7f20334529b0e6af5c0cbbd2fb99420b8b34e9dbbc7373a6a4f5689c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:26:34 compute-0 systemd[1]: libpod-conmon-f974096e7f20334529b0e6af5c0cbbd2fb99420b8b34e9dbbc7373a6a4f5689c.scope: Deactivated successfully.
Jan 21 23:26:34 compute-0 sudo[92163]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:34 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:26:34 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:34 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:26:34 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:34 compute-0 ceph-mgr[74614]: [progress INFO root] update: starting ev 414e3ec2-4694-42fe-9950-db1f71aa0dcf (Updating rgw.rgw deployment (+3 -> 3))
Jan 21 23:26:34 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.eaptiy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 21 23:26:34 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.eaptiy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 21 23:26:34 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.eaptiy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 23:26:34 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 21 23:26:34 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:34 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:26:34 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:34 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.eaptiy on compute-2
Jan 21 23:26:34 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.eaptiy on compute-2
Jan 21 23:26:34 compute-0 ansible-async_wrapper.py[91670]: Done in kid B.
Jan 21 23:26:35 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14334 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 23:26:35 compute-0 condescending_hellman[92381]: 
Jan 21 23:26:35 compute-0 condescending_hellman[92381]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Jan 21 23:26:35 compute-0 systemd[1]: libpod-bd999916c22e1ca98382373ae245b70418587f1dc39988566148387e0b3eade1.scope: Deactivated successfully.
Jan 21 23:26:35 compute-0 podman[92354]: 2026-01-21 23:26:35.221285419 +0000 UTC m=+0.827308698 container died bd999916c22e1ca98382373ae245b70418587f1dc39988566148387e0b3eade1 (image=quay.io/ceph/ceph:v18, name=condescending_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f529c37c691eac7e14de890dee07eb57bdfe0cf6e33eaafa09e595b23da48de-merged.mount: Deactivated successfully.
Jan 21 23:26:35 compute-0 podman[92354]: 2026-01-21 23:26:35.283976101 +0000 UTC m=+0.889999360 container remove bd999916c22e1ca98382373ae245b70418587f1dc39988566148387e0b3eade1 (image=quay.io/ceph/ceph:v18, name=condescending_hellman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 21 23:26:35 compute-0 systemd[1]: libpod-conmon-bd999916c22e1ca98382373ae245b70418587f1dc39988566148387e0b3eade1.scope: Deactivated successfully.
Jan 21 23:26:35 compute-0 sudo[92346]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:35 compute-0 ceph-mon[74318]: 3.13 scrub starts
Jan 21 23:26:35 compute-0 ceph-mon[74318]: 3.13 scrub ok
Jan 21 23:26:35 compute-0 ceph-mon[74318]: 4.4 deep-scrub starts
Jan 21 23:26:35 compute-0 ceph-mon[74318]: 4.4 deep-scrub ok
Jan 21 23:26:35 compute-0 ceph-mon[74318]: pgmap v120: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:35 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:35 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:35 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.eaptiy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 21 23:26:35 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.eaptiy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 23:26:35 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:35 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:35 compute-0 ceph-mon[74318]: Deploying daemon rgw.rgw.compute-2.eaptiy on compute-2
Jan 21 23:26:36 compute-0 sudo[92453]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwcwmxupyhsruaytfxpdtpxfiivrxvwg ; /usr/bin/python3'
Jan 21 23:26:36 compute-0 sudo[92453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:36 compute-0 python3[92455]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:26:36 compute-0 podman[92456]: 2026-01-21 23:26:36.41224183 +0000 UTC m=+0.057486567 container create d003219588720250ecb18afe2e269959ea974fb2c5de631b8f962f3fe0fb3143 (image=quay.io/ceph/ceph:v18, name=sleepy_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:26:36 compute-0 systemd[1]: Started libpod-conmon-d003219588720250ecb18afe2e269959ea974fb2c5de631b8f962f3fe0fb3143.scope.
Jan 21 23:26:36 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1459a0f79eaf0787b38c7ac46d5a901261cc3a7e9f8418d4fc83bcd308b485/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1459a0f79eaf0787b38c7ac46d5a901261cc3a7e9f8418d4fc83bcd308b485/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v121: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:36 compute-0 podman[92456]: 2026-01-21 23:26:36.392821475 +0000 UTC m=+0.038066192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:36 compute-0 podman[92456]: 2026-01-21 23:26:36.495916357 +0000 UTC m=+0.141161094 container init d003219588720250ecb18afe2e269959ea974fb2c5de631b8f962f3fe0fb3143 (image=quay.io/ceph/ceph:v18, name=sleepy_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:26:36 compute-0 podman[92456]: 2026-01-21 23:26:36.502994212 +0000 UTC m=+0.148238939 container start d003219588720250ecb18afe2e269959ea974fb2c5de631b8f962f3fe0fb3143 (image=quay.io/ceph/ceph:v18, name=sleepy_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:26:36 compute-0 podman[92456]: 2026-01-21 23:26:36.50678632 +0000 UTC m=+0.152031047 container attach d003219588720250ecb18afe2e269959ea974fb2c5de631b8f962f3fe0fb3143 (image=quay.io/ceph/ceph:v18, name=sleepy_goodall, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 23:26:36 compute-0 ceph-mon[74318]: from='client.14334 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 23:26:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:26:36 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:26:36 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 21 23:26:36 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.ekhhbx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 21 23:26:36 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.ekhhbx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 21 23:26:36 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.ekhhbx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 23:26:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 21 23:26:37 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:26:37 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:37 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.ekhhbx on compute-1
Jan 21 23:26:37 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.ekhhbx on compute-1
Jan 21 23:26:37 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.14340 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 23:26:37 compute-0 sleepy_goodall[92471]: 
Jan 21 23:26:37 compute-0 sleepy_goodall[92471]: [{"container_id": "fccf1150c9b9", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.76%", "created": "2026-01-21T23:24:09.643834Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-21T23:24:09.707675Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T23:25:11.691452Z", "memory_usage": 11597250, "ports": [], "service_name": "crash", "started": "2026-01-21T23:24:09.502411Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3759241a-7f1c-520d-ba17-879943ee2f00@crash.compute-0", "version": "18.2.7"}, {"container_id": "fc32db6efcae", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.74%", "created": "2026-01-21T23:24:52.649769Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "events": ["2026-01-21T23:24:52.701461Z daemon:crash.compute-1 [INFO] \"Deployed crash.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-21T23:26:20.738845Z", "memory_usage": 11733565, "ports": [], "service_name": "crash", "started": "2026-01-21T23:24:52.541708Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3759241a-7f1c-520d-ba17-879943ee2f00@crash.compute-1", "version": "18.2.7"}, {"container_id": "bb02e958e6ab", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.12%", "created": "2026-01-21T23:26:01.208703Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "events": ["2026-01-21T23:26:01.299820Z daemon:crash.compute-2 [INFO] \"Deployed crash.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-21T23:26:20.489760Z", "memory_usage": 11660165, "ports": [], "service_name": "crash", "started": "2026-01-21T23:26:01.105167Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3759241a-7f1c-520d-ba17-879943ee2f00@crash.compute-2", "version": "18.2.7"}, {"container_id": "1a53c738ce79", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": 
"0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "37.35%", "created": "2026-01-21T23:22:54.371504Z", "daemon_id": "compute-0.boqcsl", "daemon_name": "mgr.compute-0.boqcsl", "daemon_type": "mgr", "events": ["2026-01-21T23:24:13.258471Z daemon:mgr.compute-0.boqcsl [INFO] \"Reconfigured mgr.compute-0.boqcsl on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T23:25:11.691315Z", "memory_usage": 547042099, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-21T23:22:54.257586Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3759241a-7f1c-520d-ba17-879943ee2f00@mgr.compute-0.boqcsl", "version": "18.2.7"}, {"container_id": "cabedca580a2", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "99.98%", "created": "2026-01-21T23:25:55.586257Z", "daemon_id": "compute-1.ihmngr", "daemon_name": "mgr.compute-1.ihmngr", "daemon_type": "mgr", "events": ["2026-01-21T23:25:55.664120Z daemon:mgr.compute-1.ihmngr [INFO] \"Deployed mgr.compute-1.ihmngr on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-21T23:26:20.739241Z", "memory_usage": 500065894, "ports": [8765], "service_name": "mgr", "started": "2026-01-21T23:25:55.442621Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3759241a-7f1c-520d-ba17-879943ee2f00@mgr.compute-1.ihmngr", "version": "18.2.7"}, {"container_id": "9d8515ffedb4", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "64.21%", "created": "2026-01-21T23:25:48.493233Z", "daemon_id": "compute-2.uvjsro", "daemon_name": "mgr.compute-2.uvjsro", "daemon_type": "mgr", "events": ["2026-01-21T23:25:53.314344Z daemon:mgr.compute-2.uvjsro [INFO] \"Deployed mgr.compute-2.uvjsro on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-21T23:26:20.489650Z", "memory_usage": 511180800, "ports": [8765], "service_name": "mgr", "started": "2026-01-21T23:25:48.388327Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3759241a-7f1c-520d-ba17-879943ee2f00@mgr.compute-2.uvjsro", "version": "18.2.7"}, {"container_id": "0441eddad815", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.59%", "created": "2026-01-21T23:22:49.155342Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-21T23:24:12.342251Z daemon:mon.compute-0 
[INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T23:25:11.691149Z", "memory_request": 2147483648, "memory_usage": 31687966, "ports": [], "service_name": "mon", "started": "2026-01-21T23:22:51.956331Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3759241a-7f1c-520d-ba17-879943ee2f00@mon.compute-0", "version": "18.2.7"}, {"container_id": "476f337d7b3d", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.77%", "created": "2026-01-21T23:25:44.032915Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "events": ["2026-01-21T23:25:46.680762Z daemon:mon.compute-1 [INFO] \"Deployed mon.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-21T23:26:20.739119Z", "memory_request": 2147483648, "memory_usage": 28668067, "ports": [], "service_name": "mon", "started": "2026-01-21T23:25:43.895106Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3759241a-7f1c-520d-ba17-879943ee2f00@mon.compute-1", "version": "18.2.7"}, {"container_id": "e641d513b055", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.67%", "created": "2026-01-21T23:25:41.468973Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "events": ["2026-01-21T23:25:41.554489Z daemon:mon.compute-2 [INFO] \"Deployed mon.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-21T23:26:20.489453Z", "memory_request": 2147483648, "memory_usage": 29108469, "ports": [], "service_name": "mon", "started": "2026-01-21T23:25:41.316663Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3759241a-7f1c-520d-ba17-879943ee2f00@mon.compute-2", "version": "18.2.7"}, {"container_id": "2c6a03273f20", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "7.59%", "created": "2026-01-21T23:25:07.580676Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-21T23:25:07.646122Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T23:25:11.691660Z", "memory_request": 4294967296, "memory_usage": 31866224, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-21T23:25:07.450607Z", "status": 1, "status_desc": 
"running", "systemd_unit": "ceph-3759241a-7f1c-520d-ba17-879943ee2f00@osd.1", "version": "18.2.7"}, {"container_id": "76f718770f66", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.03%", "created": "2026-01-21T23:25:05.495955Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-21T23:25:05.899282Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-21T23:26:20.738995Z", "memory_request": 5502923980, "memory_usage": 63522734, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-21T23:25:05.361288Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3759241a-7f1c-520d-ba17-879943ee2f00@osd.0", "version": "18.2.7"}, {"container_id": "c623723cca85", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "8.54%", "created": "2026-01-21T23:26:15.846245Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-21T23:26:15.911453Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-21T23:26:20.489838Z", "memory_request": 4294967296, "memory_usage": 32768000, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-21T23:26:15.723105Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3759241a-7f1c-520d-ba17-879943ee2f00@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-2.eaptiy", "daemon_name": "rgw.rgw.compute-2.eaptiy", "daemon_type": "rgw", "events": ["2026-01-21T23:26:36.957613Z daemon:rgw.rgw.compute-2.eaptiy [INFO] \"Deployed rgw.rgw.compute-2.eaptiy on host 'compute-2'\""], "hostname": "compute-2", "ip": "192.168.122.102", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Jan 21 23:26:37 compute-0 systemd[1]: libpod-d003219588720250ecb18afe2e269959ea974fb2c5de631b8f962f3fe0fb3143.scope: Deactivated successfully.
Jan 21 23:26:37 compute-0 podman[92456]: 2026-01-21 23:26:37.129707379 +0000 UTC m=+0.774952106 container died d003219588720250ecb18afe2e269959ea974fb2c5de631b8f962f3fe0fb3143 (image=quay.io/ceph/ceph:v18, name=sleepy_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 23:26:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d1459a0f79eaf0787b38c7ac46d5a901261cc3a7e9f8418d4fc83bcd308b485-merged.mount: Deactivated successfully.
Jan 21 23:26:37 compute-0 podman[92456]: 2026-01-21 23:26:37.177256737 +0000 UTC m=+0.822501444 container remove d003219588720250ecb18afe2e269959ea974fb2c5de631b8f962f3fe0fb3143 (image=quay.io/ceph/ceph:v18, name=sleepy_goodall, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:26:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:26:37 compute-0 systemd[1]: libpod-conmon-d003219588720250ecb18afe2e269959ea974fb2c5de631b8f962f3fe0fb3143.scope: Deactivated successfully.
Jan 21 23:26:37 compute-0 sudo[92453]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:37 compute-0 rsyslogd[1006]: message too long (13203) with configured size 8096, begin of message is: [{"container_id": "fccf1150c9b9", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
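
rsyslogd is truncating the 13203-byte `orch ps` JSON above because it exceeds the configured 8096-byte message size (rsyslog's default); the journal itself still holds the full message. If complete messages are wanted in /var/log/messages as well, the limit can presumably be raised with `$MaxMessageSize 64k` (legacy syntax) or `global(maxMessageSize="64k")` near the top of /etc/rsyslog.conf, before any input modules are loaded.
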
Jan 21 23:26:37 compute-0 ceph-mon[74318]: 4.7 scrub starts
Jan 21 23:26:37 compute-0 ceph-mon[74318]: 4.7 scrub ok
Jan 21 23:26:37 compute-0 ceph-mon[74318]: pgmap v121: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:37 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:37 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:37 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:37 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.ekhhbx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 21 23:26:37 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.ekhhbx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 23:26:37 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:37 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:37 compute-0 ceph-mon[74318]: Deploying daemon rgw.rgw.compute-1.ekhhbx on compute-1
Jan 21 23:26:37 compute-0 ceph-mon[74318]: from='client.14340 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 23:26:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 21 23:26:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 21 23:26:37 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 21 23:26:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Jan 21 23:26:37 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 21 23:26:37 compute-0 sudo[92535]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqslmdwnmutjpiidrpcfkjtygcxggkmx ; /usr/bin/python3'
Jan 21 23:26:38 compute-0 sudo[92535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:38 compute-0 python3[92537]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:26:38 compute-0 podman[92538]: 2026-01-21 23:26:38.236991992 +0000 UTC m=+0.064045977 container create a815b60f5781bf5d0e07d98aef7b6e5ca56ca0e49dee8e2792191374b168aff3 (image=quay.io/ceph/ceph:v18, name=cool_taussig, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:26:38 compute-0 systemd[1]: Started libpod-conmon-a815b60f5781bf5d0e07d98aef7b6e5ca56ca0e49dee8e2792191374b168aff3.scope.
Jan 21 23:26:38 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:38 compute-0 podman[92538]: 2026-01-21 23:26:38.212100164 +0000 UTC m=+0.039154149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/999b36d0becaccdda71f3972d7b3aa155d44a6cd5a7f0d8335091bcc9a221b02/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/999b36d0becaccdda71f3972d7b3aa155d44a6cd5a7f0d8335091bcc9a221b02/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:38 compute-0 podman[92538]: 2026-01-21 23:26:38.318185225 +0000 UTC m=+0.145239170 container init a815b60f5781bf5d0e07d98aef7b6e5ca56ca0e49dee8e2792191374b168aff3 (image=quay.io/ceph/ceph:v18, name=cool_taussig, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:38 compute-0 podman[92538]: 2026-01-21 23:26:38.323423201 +0000 UTC m=+0.150477146 container start a815b60f5781bf5d0e07d98aef7b6e5ca56ca0e49dee8e2792191374b168aff3 (image=quay.io/ceph/ceph:v18, name=cool_taussig, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 23:26:38 compute-0 podman[92538]: 2026-01-21 23:26:38.326398698 +0000 UTC m=+0.153452643 container attach a815b60f5781bf5d0e07d98aef7b6e5ca56ca0e49dee8e2792191374b168aff3 (image=quay.io/ceph/ceph:v18, name=cool_taussig, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 23:26:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v123: 101 pgs: 1 unknown, 100 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:26:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 21 23:26:38 compute-0 ceph-mon[74318]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 23:26:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 21 23:26:38 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1659848739' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 21 23:26:38 compute-0 cool_taussig[92553]: 
Jan 21 23:26:38 compute-0 cool_taussig[92553]: {"fsid":"3759241a-7f1c-520d-ba17-879943ee2f00","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":45,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":40,"num_osds":3,"num_up_osds":3,"osd_up_since":1769037985,"num_in_osds":3,"osd_in_since":1769037964,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":100}],"num_pgs":100,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84054016,"bytes_avail":22451941376,"bytes_total":22535995392},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2026-01-21T23:26:20.476937+0000","services":{"mgr":{"daemons":{"summary":"","compute-2.uvjsro":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"414e3ec2-4694-42fe-9950-db1f71aa0dcf":{"message":"Updating rgw.rgw deployment (+3 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 21 23:26:39 compute-0 systemd[1]: libpod-a815b60f5781bf5d0e07d98aef7b6e5ca56ca0e49dee8e2792191374b168aff3.scope: Deactivated successfully.
Jan 21 23:26:39 compute-0 podman[92538]: 2026-01-21 23:26:39.002659346 +0000 UTC m=+0.829713301 container died a815b60f5781bf5d0e07d98aef7b6e5ca56ca0e49dee8e2792191374b168aff3 (image=quay.io/ceph/ceph:v18, name=cool_taussig, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 21 23:26:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:26:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 21 23:26:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 21 23:26:39 compute-0 ceph-mon[74318]: osdmap e40: 3 total, 3 up, 3 in
Jan 21 23:26:39 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2313988470' entity='client.rgw.rgw.compute-2.eaptiy' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 21 23:26:39 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 21 23:26:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-999b36d0becaccdda71f3972d7b3aa155d44a6cd5a7f0d8335091bcc9a221b02-merged.mount: Deactivated successfully.
Jan 21 23:26:39 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 21 23:26:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 21 23:26:39 compute-0 podman[92538]: 2026-01-21 23:26:39.131017446 +0000 UTC m=+0.958071401 container remove a815b60f5781bf5d0e07d98aef7b6e5ca56ca0e49dee8e2792191374b168aff3 (image=quay.io/ceph/ceph:v18, name=cool_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 23:26:39 compute-0 systemd[1]: libpod-conmon-a815b60f5781bf5d0e07d98aef7b6e5ca56ca0e49dee8e2792191374b168aff3.scope: Deactivated successfully.
Jan 21 23:26:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.quiikw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 21 23:26:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.quiikw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:26:39
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Some PGs (0.009901) are unknown; try again later
Jan 21 23:26:39 compute-0 sudo[92535]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.quiikw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 23:26:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 21 23:26:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:26:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.quiikw on compute-0
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.quiikw on compute-0
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:26:39 compute-0 sudo[92595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:39 compute-0 sudo[92595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:39 compute-0 sudo[92595]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:26:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:26:39 compute-0 sudo[92620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:26:39 compute-0 sudo[92620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:39 compute-0 sudo[92620]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:39 compute-0 sudo[92645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:39 compute-0 sudo[92645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:39 compute-0 sudo[92645]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:39 compute-0 sudo[92670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:26:39 compute-0 sudo[92670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:39 compute-0 podman[92733]: 2026-01-21 23:26:39.903064525 +0000 UTC m=+0.071628525 container create d16831de43ab2322c7cef221caaa33725216125571460a91b3d15d02e5d1414a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 23:26:39 compute-0 systemd[1]: Started libpod-conmon-d16831de43ab2322c7cef221caaa33725216125571460a91b3d15d02e5d1414a.scope.
Jan 21 23:26:39 compute-0 podman[92733]: 2026-01-21 23:26:39.877060498 +0000 UTC m=+0.045624578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:26:39 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:39 compute-0 sudo[92774]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggaajfftczwdsnigzxfdarabqdbquvrr ; /usr/bin/python3'
Jan 21 23:26:39 compute-0 sudo[92774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:40 compute-0 podman[92733]: 2026-01-21 23:26:40.060227355 +0000 UTC m=+0.228791375 container init d16831de43ab2322c7cef221caaa33725216125571460a91b3d15d02e5d1414a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:26:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 21 23:26:40 compute-0 podman[92733]: 2026-01-21 23:26:40.067086553 +0000 UTC m=+0.235650573 container start d16831de43ab2322c7cef221caaa33725216125571460a91b3d15d02e5d1414a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:40 compute-0 nice_shtern[92773]: 167 167
Jan 21 23:26:40 compute-0 systemd[1]: libpod-d16831de43ab2322c7cef221caaa33725216125571460a91b3d15d02e5d1414a.scope: Deactivated successfully.
Jan 21 23:26:40 compute-0 podman[92733]: 2026-01-21 23:26:40.077146095 +0000 UTC m=+0.245710135 container attach d16831de43ab2322c7cef221caaa33725216125571460a91b3d15d02e5d1414a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:26:40 compute-0 podman[92733]: 2026-01-21 23:26:40.077520185 +0000 UTC m=+0.246084195 container died d16831de43ab2322c7cef221caaa33725216125571460a91b3d15d02e5d1414a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 21 23:26:40 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 21 23:26:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 21 23:26:40 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.ekhhbx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 21 23:26:40 compute-0 ceph-mon[74318]: pgmap v123: 101 pgs: 1 unknown, 100 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:40 compute-0 ceph-mon[74318]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 21 23:26:40 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1659848739' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 21 23:26:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:40 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 21 23:26:40 compute-0 ceph-mon[74318]: osdmap e41: 3 total, 3 up, 3 in
Jan 21 23:26:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.quiikw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 21 23:26:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.quiikw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 23:26:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-6870a69bcf89636b12e9130e5736edd32cf120f88b11ce3892ef1d181085e31d-merged.mount: Deactivated successfully.
Jan 21 23:26:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 21 23:26:40 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 21 23:26:40 compute-0 podman[92733]: 2026-01-21 23:26:40.142901025 +0000 UTC m=+0.311465025 container remove d16831de43ab2322c7cef221caaa33725216125571460a91b3d15d02e5d1414a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 21 23:26:40 compute-0 systemd[1]: libpod-conmon-d16831de43ab2322c7cef221caaa33725216125571460a91b3d15d02e5d1414a.scope: Deactivated successfully.
Jan 21 23:26:40 compute-0 python3[92778]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
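The ansible task above runs the ceph CLI from a throwaway container instead of requiring ceph-common on the host. Reconstructed as a standalone shell command, every flag copied from that log line with only line breaks added:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        config dump -f json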
Jan 21 23:26:40 compute-0 systemd[1]: Reloading.
Jan 21 23:26:40 compute-0 podman[92796]: 2026-01-21 23:26:40.235866115 +0000 UTC m=+0.050371762 container create 176238dc26fd102a84d7ebe1516a4549cc2bc2c4c739f8ff4108a0e4d2e5278f (image=quay.io/ceph/ceph:v18, name=hungry_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 21 23:26:40 compute-0 ceph-mgr[74614]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Jan 21 23:26:40 compute-0 systemd-rc-local-generator[92836]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:26:40 compute-0 systemd-sysv-generator[92840]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:26:40 compute-0 podman[92796]: 2026-01-21 23:26:40.216313446 +0000 UTC m=+0.030819113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:40 compute-0 systemd[1]: Started libpod-conmon-176238dc26fd102a84d7ebe1516a4549cc2bc2c4c739f8ff4108a0e4d2e5278f.scope.
Jan 21 23:26:40 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v126: 102 pgs: 1 unknown, 101 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 682 B/s wr, 3 op/s
Jan 21 23:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4fe0590646fbc6216c6438daa0ae1c98ec23c9be632aa7116b48925442a077a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4fe0590646fbc6216c6438daa0ae1c98ec23c9be632aa7116b48925442a077a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:40 compute-0 systemd[1]: Reloading.
Jan 21 23:26:40 compute-0 podman[92796]: 2026-01-21 23:26:40.50737839 +0000 UTC m=+0.321884057 container init 176238dc26fd102a84d7ebe1516a4549cc2bc2c4c739f8ff4108a0e4d2e5278f (image=quay.io/ceph/ceph:v18, name=hungry_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 21 23:26:40 compute-0 podman[92796]: 2026-01-21 23:26:40.514914736 +0000 UTC m=+0.329420383 container start 176238dc26fd102a84d7ebe1516a4549cc2bc2c4c739f8ff4108a0e4d2e5278f (image=quay.io/ceph/ceph:v18, name=hungry_satoshi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 23:26:40 compute-0 podman[92796]: 2026-01-21 23:26:40.519725231 +0000 UTC m=+0.334230928 container attach 176238dc26fd102a84d7ebe1516a4549cc2bc2c4c739f8ff4108a0e4d2e5278f (image=quay.io/ceph/ceph:v18, name=hungry_satoshi, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:26:40 compute-0 systemd-rc-local-generator[92882]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:26:40 compute-0 systemd-sysv-generator[92885]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:26:40 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.quiikw for 3759241a-7f1c-520d-ba17-879943ee2f00...
Jan 21 23:26:40 compute-0 podman[92959]: 2026-01-21 23:26:40.996406694 +0000 UTC m=+0.043564224 container create 57f4f248af7db942c8266333dab95700b3cde4c8b6c75aed70592c5f65720d30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-rgw-rgw-compute-0-quiikw, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:26:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 21 23:26:41 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2811656945' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 21 23:26:41 compute-0 hungry_satoshi[92849]: 
Jan 21 23:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43e12bed06f2780f04a3a455fed09b6fa47cd31a5998b0bc1c9c358cf5c105df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43e12bed06f2780f04a3a455fed09b6fa47cd31a5998b0bc1c9c358cf5c105df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43e12bed06f2780f04a3a455fed09b6fa47cd31a5998b0bc1c9c358cf5c105df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43e12bed06f2780f04a3a455fed09b6fa47cd31a5998b0bc1c9c358cf5c105df/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.quiikw supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:41 compute-0 hungry_satoshi[92849]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502923980","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.quiikw","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.ekhhbx","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.eaptiy","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
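The single line above is the complete `config dump -f json` output printed by the hungry_satoshi container. A sketch for extracting one value from that JSON with jq (the jq pipeline is an assumption; the option name and expected value come from the dump itself):

    ceph config dump -f json \
        | jq -r '.[] | select(.name=="rgw_keystone_url") | .value'
    # expected: https://keystone-internal.openstack.svc:5000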
Jan 21 23:26:41 compute-0 systemd[1]: libpod-176238dc26fd102a84d7ebe1516a4549cc2bc2c4c739f8ff4108a0e4d2e5278f.scope: Deactivated successfully.
Jan 21 23:26:41 compute-0 podman[92796]: 2026-01-21 23:26:41.070792171 +0000 UTC m=+0.885297848 container died 176238dc26fd102a84d7ebe1516a4549cc2bc2c4c739f8ff4108a0e4d2e5278f (image=quay.io/ceph/ceph:v18, name=hungry_satoshi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:26:41 compute-0 podman[92959]: 2026-01-21 23:26:40.975937052 +0000 UTC m=+0.023094592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:26:41 compute-0 podman[92959]: 2026-01-21 23:26:41.075720389 +0000 UTC m=+0.122877909 container init 57f4f248af7db942c8266333dab95700b3cde4c8b6c75aed70592c5f65720d30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-rgw-rgw-compute-0-quiikw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 21 23:26:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 21 23:26:41 compute-0 podman[92959]: 2026-01-21 23:26:41.086719175 +0000 UTC m=+0.133876695 container start 57f4f248af7db942c8266333dab95700b3cde4c8b6c75aed70592c5f65720d30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-rgw-rgw-compute-0-quiikw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 23:26:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.ekhhbx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 21 23:26:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 21 23:26:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 21 23:26:41 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 21 23:26:41 compute-0 bash[92959]: 57f4f248af7db942c8266333dab95700b3cde4c8b6c75aed70592c5f65720d30
Jan 21 23:26:41 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.quiikw for 3759241a-7f1c-520d-ba17-879943ee2f00.
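systemd reports the RGW daemon running under cephadm's per-cluster unit naming. Assuming the usual ceph-<fsid>@<daemon-type>.<daemon-id>.service convention (an assumption; the exact unit name is not printed in this log), a status check would look like:

    systemctl status 'ceph-3759241a-7f1c-520d-ba17-879943ee2f00@rgw.rgw.compute-0.quiikw.service'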
Jan 21 23:26:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4fe0590646fbc6216c6438daa0ae1c98ec23c9be632aa7116b48925442a077a-merged.mount: Deactivated successfully.
Jan 21 23:26:41 compute-0 sudo[92670]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:26:41 compute-0 radosgw[92982]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 21 23:26:41 compute-0 radosgw[92982]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Jan 21 23:26:41 compute-0 radosgw[92982]: framework: beast
Jan 21 23:26:41 compute-0 radosgw[92982]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 21 23:26:41 compute-0 radosgw[92982]: init_numa not setting numa affinity
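The framework/endpoint lines above show radosgw binding its beast frontend to the address configured per daemon in the config dump (rgw_frontends = beast endpoint=192.168.122.100:8082 under client.rgw.rgw.compute-0.quiikw). A sketch of setting and probing that option by hand (hypothetical commands; cephadm normally manages rgw_frontends from the service spec, and curl is only an illustrative liveness check):

    ceph config set client.rgw.rgw.compute-0.quiikw \
        rgw_frontends 'beast endpoint=192.168.122.100:8082'
    curl -s http://192.168.122.100:8082/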
Jan 21 23:26:41 compute-0 ceph-mon[74318]: 4.1f scrub starts
Jan 21 23:26:41 compute-0 ceph-mon[74318]: 4.1f scrub ok
Jan 21 23:26:41 compute-0 ceph-mon[74318]: Deploying daemon rgw.rgw.compute-0.quiikw on compute-0
Jan 21 23:26:41 compute-0 ceph-mon[74318]: osdmap e42: 3 total, 3 up, 3 in
Jan 21 23:26:41 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-1.ekhhbx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 21 23:26:41 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/259375498' entity='client.rgw.rgw.compute-1.ekhhbx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 21 23:26:41 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2313988470' entity='client.rgw.rgw.compute-2.eaptiy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 21 23:26:41 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 21 23:26:41 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2811656945' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 21 23:26:41 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-1.ekhhbx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 21 23:26:41 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 21 23:26:41 compute-0 ceph-mon[74318]: osdmap e43: 3 total, 3 up, 3 in
Jan 21 23:26:41 compute-0 podman[92796]: 2026-01-21 23:26:41.14685495 +0000 UTC m=+0.961360607 container remove 176238dc26fd102a84d7ebe1516a4549cc2bc2c4c739f8ff4108a0e4d2e5278f (image=quay.io/ceph/ceph:v18, name=hungry_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 23:26:41 compute-0 systemd[1]: libpod-conmon-176238dc26fd102a84d7ebe1516a4549cc2bc2c4c739f8ff4108a0e4d2e5278f.scope: Deactivated successfully.
Jan 21 23:26:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:26:41 compute-0 sudo[92774]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 21 23:26:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:41 compute-0 ceph-mgr[74614]: [progress INFO root] complete: finished ev 414e3ec2-4694-42fe-9950-db1f71aa0dcf (Updating rgw.rgw deployment (+3 -> 3))
Jan 21 23:26:41 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event 414e3ec2-4694-42fe-9950-db1f71aa0dcf (Updating rgw.rgw deployment (+3 -> 3)) in 6 seconds
Jan 21 23:26:41 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 21 23:26:41 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 21 23:26:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 21 23:26:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 21 23:26:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:41 compute-0 ceph-mgr[74614]: [progress INFO root] update: starting ev ca5ca510-4fc3-42cb-b9b7-70f1b2c9f6c1 (Updating mds.cephfs deployment (+3 -> 3))
Jan 21 23:26:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.kghltm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 21 23:26:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.kghltm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 21 23:26:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.kghltm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 21 23:26:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:26:41 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:41 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.kghltm on compute-2
Jan 21 23:26:41 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.kghltm on compute-2
Jan 21 23:26:41 compute-0 sudo[93079]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whramsrxapeybjvctolccdvbngvlmssy ; /usr/bin/python3'
Jan 21 23:26:41 compute-0 sudo[93079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:42 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 21 23:26:42 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 21 23:26:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 21 23:26:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 21 23:26:42 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 21 23:26:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 21 23:26:42 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.ekhhbx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 21 23:26:42 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 44 pg[10.0( empty local-lis/les=0/0 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [1] r=0 lpr=44 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 21 23:26:42 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 21 23:26:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 21 23:26:42 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-0.quiikw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 21 23:26:42 compute-0 python3[93081]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:26:42 compute-0 ceph-mon[74318]: pgmap v126: 102 pgs: 1 unknown, 101 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 682 B/s wr, 3 op/s
Jan 21 23:26:42 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:42 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:42 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:42 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:42 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:42 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.kghltm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 21 23:26:42 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.kghltm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 21 23:26:42 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:42 compute-0 ceph-mon[74318]: 4.b scrub starts
Jan 21 23:26:42 compute-0 ceph-mon[74318]: 4.b scrub ok
Jan 21 23:26:42 compute-0 ceph-mon[74318]: 4.13 scrub starts
Jan 21 23:26:42 compute-0 ceph-mon[74318]: 4.13 scrub ok
Jan 21 23:26:42 compute-0 ceph-mon[74318]: osdmap e44: 3 total, 3 up, 3 in
Jan 21 23:26:42 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-1.ekhhbx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 21 23:26:42 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2313988470' entity='client.rgw.rgw.compute-2.eaptiy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 21 23:26:42 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/259375498' entity='client.rgw.rgw.compute-1.ekhhbx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 21 23:26:42 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 21 23:26:42 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2393786590' entity='client.rgw.rgw.compute-0.quiikw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 21 23:26:42 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-0.quiikw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 21 23:26:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:26:42 compute-0 podman[93082]: 2026-01-21 23:26:42.215637451 +0000 UTC m=+0.053262857 container create b2734fd81c74e8d4387467a349247afa22f13489244c7b484a6e7ee5e1d9eccd (image=quay.io/ceph/ceph:v18, name=gracious_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:26:42 compute-0 systemd[1]: Started libpod-conmon-b2734fd81c74e8d4387467a349247afa22f13489244c7b484a6e7ee5e1d9eccd.scope.
Jan 21 23:26:42 compute-0 podman[93082]: 2026-01-21 23:26:42.186770569 +0000 UTC m=+0.024396055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:42 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18e50ee405172b29ee328dca83404e7a07ecf48727e4a9469e7806fb1baecd9b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18e50ee405172b29ee328dca83404e7a07ecf48727e4a9469e7806fb1baecd9b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:42 compute-0 podman[93082]: 2026-01-21 23:26:42.310384326 +0000 UTC m=+0.148009742 container init b2734fd81c74e8d4387467a349247afa22f13489244c7b484a6e7ee5e1d9eccd (image=quay.io/ceph/ceph:v18, name=gracious_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Jan 21 23:26:42 compute-0 podman[93082]: 2026-01-21 23:26:42.321399692 +0000 UTC m=+0.159025088 container start b2734fd81c74e8d4387467a349247afa22f13489244c7b484a6e7ee5e1d9eccd (image=quay.io/ceph/ceph:v18, name=gracious_greider, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 23:26:42 compute-0 podman[93082]: 2026-01-21 23:26:42.324266997 +0000 UTC m=+0.161892413 container attach b2734fd81c74e8d4387467a349247afa22f13489244c7b484a6e7ee5e1d9eccd (image=quay.io/ceph/ceph:v18, name=gracious_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 23:26:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v129: 103 pgs: 2 unknown, 101 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 21 23:26:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Jan 21 23:26:42 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2283548860' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 21 23:26:42 compute-0 gracious_greider[93097]: mimic
Jan 21 23:26:42 compute-0 systemd[1]: libpod-b2734fd81c74e8d4387467a349247afa22f13489244c7b484a6e7ee5e1d9eccd.scope: Deactivated successfully.
Jan 21 23:26:42 compute-0 podman[93122]: 2026-01-21 23:26:42.934453085 +0000 UTC m=+0.027481466 container died b2734fd81c74e8d4387467a349247afa22f13489244c7b484a6e7ee5e1d9eccd (image=quay.io/ceph/ceph:v18, name=gracious_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 23:26:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-18e50ee405172b29ee328dca83404e7a07ecf48727e4a9469e7806fb1baecd9b-merged.mount: Deactivated successfully.
Jan 21 23:26:42 compute-0 podman[93122]: 2026-01-21 23:26:42.972244869 +0000 UTC m=+0.065273220 container remove b2734fd81c74e8d4387467a349247afa22f13489244c7b484a6e7ee5e1d9eccd (image=quay.io/ceph/ceph:v18, name=gracious_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 21 23:26:42 compute-0 systemd[1]: libpod-conmon-b2734fd81c74e8d4387467a349247afa22f13489244c7b484a6e7ee5e1d9eccd.scope: Deactivated successfully.
Jan 21 23:26:42 compute-0 sudo[93079]: pam_unix(sudo:session): session closed for user root
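The gracious_greider container's one line of output, "mimic", answers the osd get-require-min-compat-client query dispatched above: the cluster still admits clients speaking the mimic feature set. A sketch of the query and its counterpart (set-require-min-compat-client is the matching mon command; running it here is hypothetical, and raising the floor can lock out older clients):

    ceph osd get-require-min-compat-client     # -> mimic
    ceph osd set-require-min-compat-client mimic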
Jan 21 23:26:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 21 23:26:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.ekhhbx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 21 23:26:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 21 23:26:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-0.quiikw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 21 23:26:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 21 23:26:43 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 21 23:26:43 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 45 pg[10.0( empty local-lis/les=44/45 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [1] r=0 lpr=44 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:43 compute-0 ceph-mon[74318]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 21 23:26:43 compute-0 ceph-mon[74318]: Deploying daemon mds.cephfs.compute-2.kghltm on compute-2
Jan 21 23:26:43 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2283548860' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 21 23:26:43 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-1.ekhhbx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 21 23:26:43 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 21 23:26:43 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-0.quiikw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 21 23:26:43 compute-0 ceph-mon[74318]: osdmap e45: 3 total, 3 up, 3 in
Jan 21 23:26:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:26:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:26:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 21 23:26:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zcqesz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 21 23:26:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zcqesz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 21 23:26:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zcqesz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 21 23:26:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:26:43 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:43 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.zcqesz on compute-0
Jan 21 23:26:43 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.zcqesz on compute-0
Jan 21 23:26:43 compute-0 sudo[93154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:43 compute-0 sudo[93154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:43 compute-0 sudo[93154]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:43 compute-0 sudo[93179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:26:43 compute-0 sudo[93179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:43 compute-0 sudo[93179]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:43 compute-0 sudo[93204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:43 compute-0 sudo[93204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:43 compute-0 sudo[93204]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:43 compute-0 sudo[93229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:26:43 compute-0 sudo[93229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:43 compute-0 sudo[93277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvdkakuyxwutowevjewxehvsafvdfjkr ; /usr/bin/python3'
Jan 21 23:26:43 compute-0 sudo[93277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:26:43 compute-0 python3[93280]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
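This task reuses the same containerized CLI pattern for `versions -f json`, which reports the running release per daemon type. A sketch of summarizing that JSON (jq is an assumption; the "overall" key is part of the standard `ceph versions` output):

    ceph versions -f json | jq '.overall'
    # prints daemon counts keyed by release string, e.g. the 18.2.7 reef build seen at radosgw startup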
Jan 21 23:26:43 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 21 23:26:43 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 21 23:26:44 compute-0 podman[93321]: 2026-01-21 23:26:44.018638677 +0000 UTC m=+0.044949911 container create c7141b9a87ba94be42aa2168c8b10e23a83faab89c41ec1e1bb5ce464f6b30a7 (image=quay.io/ceph/ceph:v18, name=unruffled_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 21 23:26:44 compute-0 podman[93322]: 2026-01-21 23:26:44.047319973 +0000 UTC m=+0.063801032 container create 5d6faa3dde6fe83565ed82ab607188b6741a5019eeef31dc005cd9eb6d875039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cohen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:26:44 compute-0 systemd[1]: Started libpod-conmon-c7141b9a87ba94be42aa2168c8b10e23a83faab89c41ec1e1bb5ce464f6b30a7.scope.
Jan 21 23:26:44 compute-0 systemd[1]: Started libpod-conmon-5d6faa3dde6fe83565ed82ab607188b6741a5019eeef31dc005cd9eb6d875039.scope.
Jan 21 23:26:44 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f4b31182c45d289b87a12331b4696466655ff5fb6dc5d3cdcf36c61ba5eb3c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f4b31182c45d289b87a12331b4696466655ff5fb6dc5d3cdcf36c61ba5eb3c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:44 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:44 compute-0 podman[93321]: 2026-01-21 23:26:44.002301112 +0000 UTC m=+0.028612366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:26:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 21 23:26:44 compute-0 podman[93322]: 2026-01-21 23:26:44.110182079 +0000 UTC m=+0.126663108 container init 5d6faa3dde6fe83565ed82ab607188b6741a5019eeef31dc005cd9eb6d875039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cohen, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:26:44 compute-0 podman[93322]: 2026-01-21 23:26:44.016544403 +0000 UTC m=+0.033025432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:26:44 compute-0 podman[93321]: 2026-01-21 23:26:44.115373073 +0000 UTC m=+0.141684337 container init c7141b9a87ba94be42aa2168c8b10e23a83faab89c41ec1e1bb5ce464f6b30a7 (image=quay.io/ceph/ceph:v18, name=unruffled_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 23:26:44 compute-0 podman[93322]: 2026-01-21 23:26:44.118532596 +0000 UTC m=+0.135013635 container start 5d6faa3dde6fe83565ed82ab607188b6741a5019eeef31dc005cd9eb6d875039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 21 23:26:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 21 23:26:44 compute-0 clever_cohen[93356]: 167 167
Jan 21 23:26:44 compute-0 podman[93322]: 2026-01-21 23:26:44.123923786 +0000 UTC m=+0.140404825 container attach 5d6faa3dde6fe83565ed82ab607188b6741a5019eeef31dc005cd9eb6d875039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 23:26:44 compute-0 podman[93322]: 2026-01-21 23:26:44.124715207 +0000 UTC m=+0.141196246 container died 5d6faa3dde6fe83565ed82ab607188b6741a5019eeef31dc005cd9eb6d875039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:26:44 compute-0 podman[93321]: 2026-01-21 23:26:44.124898172 +0000 UTC m=+0.151209406 container start c7141b9a87ba94be42aa2168c8b10e23a83faab89c41ec1e1bb5ce464f6b30a7 (image=quay.io/ceph/ceph:v18, name=unruffled_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 23:26:44 compute-0 systemd[1]: libpod-5d6faa3dde6fe83565ed82ab607188b6741a5019eeef31dc005cd9eb6d875039.scope: Deactivated successfully.
Jan 21 23:26:44 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 21 23:26:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 21 23:26:44 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4026277441' entity='client.rgw.rgw.compute-0.quiikw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 21 23:26:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 21 23:26:44 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.ekhhbx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 21 23:26:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 21 23:26:44 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 21 23:26:44 compute-0 podman[93321]: 2026-01-21 23:26:44.137639993 +0000 UTC m=+0.163951237 container attach c7141b9a87ba94be42aa2168c8b10e23a83faab89c41ec1e1bb5ce464f6b30a7 (image=quay.io/ceph/ceph:v18, name=unruffled_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 21 23:26:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-31f48a41840552901547cad08e4019fc7486f69d823cd818aa8766f97d87b32b-merged.mount: Deactivated successfully.
Jan 21 23:26:44 compute-0 podman[93322]: 2026-01-21 23:26:44.168220589 +0000 UTC m=+0.184701618 container remove 5d6faa3dde6fe83565ed82ab607188b6741a5019eeef31dc005cd9eb6d875039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 23:26:44 compute-0 systemd[1]: libpod-conmon-5d6faa3dde6fe83565ed82ab607188b6741a5019eeef31dc005cd9eb6d875039.scope: Deactivated successfully.
Jan 21 23:26:44 compute-0 ceph-mon[74318]: pgmap v129: 103 pgs: 2 unknown, 101 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 21 23:26:44 compute-0 ceph-mon[74318]: 3.1a scrub starts
Jan 21 23:26:44 compute-0 ceph-mon[74318]: 3.1a scrub ok
Jan 21 23:26:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:44 compute-0 ceph-mon[74318]: 4.f scrub starts
Jan 21 23:26:44 compute-0 ceph-mon[74318]: 4.f scrub ok
Jan 21 23:26:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zcqesz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 21 23:26:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zcqesz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 21 23:26:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:44 compute-0 ceph-mon[74318]: Deploying daemon mds.cephfs.compute-0.zcqesz on compute-0
Jan 21 23:26:44 compute-0 ceph-mon[74318]: 3.f scrub starts
Jan 21 23:26:44 compute-0 ceph-mon[74318]: 3.f scrub ok
Jan 21 23:26:44 compute-0 ceph-mon[74318]: osdmap e46: 3 total, 3 up, 3 in
Jan 21 23:26:44 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/4026277441' entity='client.rgw.rgw.compute-0.quiikw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 21 23:26:44 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-1.ekhhbx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 21 23:26:44 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1044773165' entity='client.rgw.rgw.compute-2.eaptiy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 21 23:26:44 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 21 23:26:44 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2467574263' entity='client.rgw.rgw.compute-1.ekhhbx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 21 23:26:44 compute-0 systemd[1]: Reloading.
Jan 21 23:26:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e3 new map
Jan 21 23:26:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-21T23:26:19.155977+0000
                                           modified        2026-01-21T23:26:19.156015+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.kghltm{-1:24146} state up:standby seq 1 addr [v2:192.168.122.102:6804/153885020,v1:192.168.122.102:6805/153885020] compat {c=[1],r=[1],i=[7ff]}]
Jan 21 23:26:44 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/153885020,v1:192.168.122.102:6805/153885020] up:boot
Jan 21 23:26:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/153885020,v1:192.168.122.102:6805/153885020] as mds.0
Jan 21 23:26:44 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.kghltm assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 21 23:26:44 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 21 23:26:44 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 21 23:26:44 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 21 23:26:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.kghltm"} v 0) v1
Jan 21 23:26:44 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.kghltm"}]: dispatch
Jan 21 23:26:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e3 all = 0
Jan 21 23:26:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e4 new map
Jan 21 23:26:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-21T23:26:19.155977+0000
                                           modified        2026-01-21T23:26:44.229613+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24146}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.kghltm{0:24146} state up:creating seq 1 addr [v2:192.168.122.102:6804/153885020,v1:192.168.122.102:6805/153885020] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Jan 21 23:26:44 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.kghltm=up:creating}
Jan 21 23:26:44 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.kghltm is now active in filesystem cephfs as rank 0
Jan 21 23:26:44 compute-0 systemd-rc-local-generator[93402]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:26:44 compute-0 systemd-sysv-generator[93405]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:26:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v132: 104 pgs: 2 unknown, 102 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:44 compute-0 systemd[1]: Reloading.
Jan 21 23:26:44 compute-0 systemd-sysv-generator[93461]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:26:44 compute-0 systemd-rc-local-generator[93456]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:26:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Jan 21 23:26:44 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/994836975' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 21 23:26:44 compute-0 unruffled_bose[93351]: 
Jan 21 23:26:44 compute-0 unruffled_bose[93351]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":10}}
Jan 21 23:26:44 compute-0 podman[93321]: 2026-01-21 23:26:44.77542582 +0000 UTC m=+0.801737144 container died c7141b9a87ba94be42aa2168c8b10e23a83faab89c41ec1e1bb5ce464f6b30a7 (image=quay.io/ceph/ceph:v18, name=unruffled_bose, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:26:44 compute-0 systemd[1]: libpod-c7141b9a87ba94be42aa2168c8b10e23a83faab89c41ec1e1bb5ce464f6b30a7.scope: Deactivated successfully.
Jan 21 23:26:44 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.zcqesz for 3759241a-7f1c-520d-ba17-879943ee2f00...
Jan 21 23:26:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-21f4b31182c45d289b87a12331b4696466655ff5fb6dc5d3cdcf36c61ba5eb3c-merged.mount: Deactivated successfully.
Jan 21 23:26:44 compute-0 podman[93321]: 2026-01-21 23:26:44.845145594 +0000 UTC m=+0.871456828 container remove c7141b9a87ba94be42aa2168c8b10e23a83faab89c41ec1e1bb5ce464f6b30a7 (image=quay.io/ceph/ceph:v18, name=unruffled_bose, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Jan 21 23:26:44 compute-0 systemd[1]: libpod-conmon-c7141b9a87ba94be42aa2168c8b10e23a83faab89c41ec1e1bb5ce464f6b30a7.scope: Deactivated successfully.
Jan 21 23:26:44 compute-0 sudo[93277]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:45 compute-0 podman[93532]: 2026-01-21 23:26:45.095583581 +0000 UTC m=+0.043146754 container create 30525f847801442be6c7f21b17bccaa7bbaf79ebbb2972fae21624a71c93d472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mds-cephfs-compute-0-zcqesz, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 23:26:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4026277441' entity='client.rgw.rgw.compute-0.quiikw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.ekhhbx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 21 23:26:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 21 23:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e8bf512dde0c4a07fe9705d0889258963a963a9bde8a441b527b870b1fc74b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4026277441' entity='client.rgw.rgw.compute-0.quiikw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 21 23:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e8bf512dde0c4a07fe9705d0889258963a963a9bde8a441b527b870b1fc74b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e8bf512dde0c4a07fe9705d0889258963a963a9bde8a441b527b870b1fc74b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e8bf512dde0c4a07fe9705d0889258963a963a9bde8a441b527b870b1fc74b/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.zcqesz supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.ekhhbx' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 21 23:26:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 21 23:26:45 compute-0 podman[93532]: 2026-01-21 23:26:45.14897161 +0000 UTC m=+0.096534803 container init 30525f847801442be6c7f21b17bccaa7bbaf79ebbb2972fae21624a71c93d472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mds-cephfs-compute-0-zcqesz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:26:45 compute-0 podman[93532]: 2026-01-21 23:26:45.154492734 +0000 UTC m=+0.102055907 container start 30525f847801442be6c7f21b17bccaa7bbaf79ebbb2972fae21624a71c93d472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mds-cephfs-compute-0-zcqesz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 21 23:26:45 compute-0 bash[93532]: 30525f847801442be6c7f21b17bccaa7bbaf79ebbb2972fae21624a71c93d472
Jan 21 23:26:45 compute-0 podman[93532]: 2026-01-21 23:26:45.077950562 +0000 UTC m=+0.025513755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:26:45 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.zcqesz for 3759241a-7f1c-520d-ba17-879943ee2f00.
Jan 21 23:26:45 compute-0 ceph-mds[93551]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 23:26:45 compute-0 ceph-mds[93551]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Jan 21 23:26:45 compute-0 ceph-mds[93551]: main not setting numa affinity
Jan 21 23:26:45 compute-0 ceph-mds[93551]: pidfile_write: ignore empty --pid-file
Jan 21 23:26:45 compute-0 sudo[93229]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:45 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mds-cephfs-compute-0-zcqesz[93547]: starting mds.cephfs.compute-0.zcqesz at 
Jan 21 23:26:45 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz Updating MDS map to version 4 from mon.0
Jan 21 23:26:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 21 23:26:45 compute-0 ceph-mon[74318]: mds.? [v2:192.168.122.102:6804/153885020,v1:192.168.122.102:6805/153885020] up:boot
Jan 21 23:26:45 compute-0 ceph-mon[74318]: daemon mds.cephfs.compute-2.kghltm assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 21 23:26:45 compute-0 ceph-mon[74318]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 21 23:26:45 compute-0 ceph-mon[74318]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 21 23:26:45 compute-0 ceph-mon[74318]: fsmap cephfs:0 1 up:standby
Jan 21 23:26:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.kghltm"}]: dispatch
Jan 21 23:26:45 compute-0 ceph-mon[74318]: fsmap cephfs:1 {0=cephfs.compute-2.kghltm=up:creating}
Jan 21 23:26:45 compute-0 ceph-mon[74318]: daemon mds.cephfs.compute-2.kghltm is now active in filesystem cephfs as rank 0
Jan 21 23:26:45 compute-0 ceph-mon[74318]: pgmap v132: 104 pgs: 2 unknown, 102 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:45 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/994836975' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 21 23:26:45 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/4026277441' entity='client.rgw.rgw.compute-0.quiikw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 21 23:26:45 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-1.ekhhbx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 21 23:26:45 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 21 23:26:45 compute-0 ceph-mon[74318]: osdmap e47: 3 total, 3 up, 3 in
Jan 21 23:26:45 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/4026277441' entity='client.rgw.rgw.compute-0.quiikw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 21 23:26:45 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1044773165' entity='client.rgw.rgw.compute-2.eaptiy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 21 23:26:45 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-1.ekhhbx' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 21 23:26:45 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 21 23:26:45 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2467574263' entity='client.rgw.rgw.compute-1.ekhhbx' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 21 23:26:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e5 new map
Jan 21 23:26:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-21T23:26:19.155977+0000
                                           modified        2026-01-21T23:26:45.242202+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24146}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.kghltm{0:24146} state up:active seq 2 addr [v2:192.168.122.102:6804/153885020,v1:192.168.122.102:6805/153885020] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zcqesz{-1:14400} state up:standby seq 1 addr [v2:192.168.122.100:6806/2304131850,v1:192.168.122.100:6807/2304131850] compat {c=[1],r=[1],i=[7ff]}]
Jan 21 23:26:45 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz Updating MDS map to version 5 from mon.0
Jan 21 23:26:45 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz Monitors have assigned me to become a standby.
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/153885020,v1:192.168.122.102:6805/153885020] up:active
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2304131850,v1:192.168.122.100:6807/2304131850] up:boot
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.kghltm=up:active} 1 up:standby
Jan 21 23:26:45 compute-0 ceph-mgr[74614]: [progress INFO root] Writing back 10 completed events
Jan 21 23:26:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.zcqesz"} v 0) v1
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.zcqesz"}]: dispatch
Jan 21 23:26:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e5 all = 0
Jan 21 23:26:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.cwcbdu", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.cwcbdu", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 21 23:26:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 21 23:26:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e6 new map
Jan 21 23:26:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-21T23:26:19.155977+0000
                                           modified        2026-01-21T23:26:45.242202+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24146}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.kghltm{0:24146} state up:active seq 2 addr [v2:192.168.122.102:6804/153885020,v1:192.168.122.102:6805/153885020] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zcqesz{-1:14400} state up:standby seq 1 addr [v2:192.168.122.100:6806/2304131850,v1:192.168.122.100:6807/2304131850] compat {c=[1],r=[1],i=[7ff]}]
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.kghltm=up:active} 1 up:standby
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.cwcbdu", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:26:45 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:45 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.cwcbdu on compute-1
Jan 21 23:26:45 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.cwcbdu on compute-1
Jan 21 23:26:45 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 21 23:26:45 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 21 23:26:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 21 23:26:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4026277441' entity='client.rgw.rgw.compute-0.quiikw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 21 23:26:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.ekhhbx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 21 23:26:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 21 23:26:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 21 23:26:46 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v135: 104 pgs: 1 creating+peering, 103 active+clean; 452 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 4.0 KiB/s wr, 16 op/s
Jan 21 23:26:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:46 compute-0 ceph-mon[74318]: mds.? [v2:192.168.122.102:6804/153885020,v1:192.168.122.102:6805/153885020] up:active
Jan 21 23:26:46 compute-0 ceph-mon[74318]: mds.? [v2:192.168.122.100:6806/2304131850,v1:192.168.122.100:6807/2304131850] up:boot
Jan 21 23:26:46 compute-0 ceph-mon[74318]: fsmap cephfs:1 {0=cephfs.compute-2.kghltm=up:active} 1 up:standby
Jan 21 23:26:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.zcqesz"}]: dispatch
Jan 21 23:26:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.cwcbdu", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 21 23:26:46 compute-0 ceph-mon[74318]: fsmap cephfs:1 {0=cephfs.compute-2.kghltm=up:active} 1 up:standby
Jan 21 23:26:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.cwcbdu", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 21 23:26:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:26:46 compute-0 ceph-mon[74318]: 4.a scrub starts
Jan 21 23:26:46 compute-0 ceph-mon[74318]: 4.a scrub ok
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 1)
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 1)
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010905220547180347 quantized to 32 (current 1)
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:26:46 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 23:26:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Jan 21 23:26:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:26:46 compute-0 radosgw[92982]: LDAP not started since no server URIs were provided in the configuration.
Jan 21 23:26:46 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-rgw-rgw-compute-0-quiikw[92975]: 2026-01-21T23:26:46.993+0000 7fd7ec90c940 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 21 23:26:46 compute-0 radosgw[92982]: framework: beast
Jan 21 23:26:46 compute-0 radosgw[92982]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 21 23:26:46 compute-0 radosgw[92982]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 21 23:26:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 21 23:26:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 21 23:26:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 21 23:26:47 compute-0 radosgw[92982]: starting handler: beast
Jan 21 23:26:47 compute-0 radosgw[92982]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 23:26:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 21 23:26:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 21 23:26:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 21 23:26:47 compute-0 radosgw[92982]: mgrc service_daemon_register rgw.14385 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.quiikw,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864308,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=119ac409-cea3-4b48-b48a-5051c5c4d377,zone_name=default,zonegroup_id=42d9c425-4262-4689-a866-1ca90bbbeea9,zonegroup_name=default}
Jan 21 23:26:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 21 23:26:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 21 23:26:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 21 23:26:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:26:47 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 21 23:26:47 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 21 23:26:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 21 23:26:47 compute-0 ceph-mon[74318]: Deploying daemon mds.cephfs.compute-1.cwcbdu on compute-1
Jan 21 23:26:47 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/4026277441' entity='client.rgw.rgw.compute-0.quiikw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 21 23:26:47 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-1.ekhhbx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 21 23:26:47 compute-0 ceph-mon[74318]: from='client.? ' entity='client.rgw.rgw.compute-2.eaptiy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 21 23:26:47 compute-0 ceph-mon[74318]: osdmap e48: 3 total, 3 up, 3 in
Jan 21 23:26:47 compute-0 ceph-mon[74318]: pgmap v135: 104 pgs: 1 creating+peering, 103 active+clean; 452 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 4.0 KiB/s wr, 16 op/s
Jan 21 23:26:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:26:47 compute-0 ceph-mon[74318]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 21 23:26:47 compute-0 ceph-mon[74318]: Cluster is now healthy
Jan 21 23:26:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 21 23:26:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 21 23:26:47 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 21 23:26:47 compute-0 ceph-mgr[74614]: [progress INFO root] update: starting ev 81638e6f-2558-41a7-976c-dadf7ce51f29 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 21 23:26:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Jan 21 23:26:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 21 23:26:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:26:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:26:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 21 23:26:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:47 compute-0 ceph-mgr[74614]: [progress INFO root] complete: finished ev ca5ca510-4fc3-42cb-b9b7-70f1b2c9f6c1 (Updating mds.cephfs deployment (+3 -> 3))
Jan 21 23:26:47 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event ca5ca510-4fc3-42cb-b9b7-70f1b2c9f6c1 (Updating mds.cephfs deployment (+3 -> 3)) in 7 seconds
Jan 21 23:26:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Jan 21 23:26:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 21 23:26:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:47 compute-0 ceph-mgr[74614]: [progress INFO root] update: starting ev 1c0a765a-6496-4aaf-b0e6-a98a360adffe (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 21 23:26:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0) v1
Jan 21 23:26:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:47 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 21 23:26:47 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 21 23:26:47 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.xtqnkr on compute-0
Jan 21 23:26:47 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.xtqnkr on compute-0
Jan 21 23:26:47 compute-0 sudo[94114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:47 compute-0 sudo[94114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:47 compute-0 sudo[94114]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:48 compute-0 sudo[94139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:26:48 compute-0 sudo[94139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:48 compute-0 sudo[94139]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:48 compute-0 sudo[94164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:26:48 compute-0 sudo[94164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:48 compute-0 sudo[94164]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:48 compute-0 sudo[94189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:26:48 compute-0 sudo[94189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:26:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v137: 104 pgs: 1 creating+peering, 103 active+clean; 452 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 3.4 KiB/s rd, 3.7 KiB/s wr, 15 op/s
Jan 21 23:26:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 21 23:26:48 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 21 23:26:48 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 21 23:26:48 compute-0 ceph-mon[74318]: osdmap e49: 3 total, 3 up, 3 in
Jan 21 23:26:48 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 21 23:26:48 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:48 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:48 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:48 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:48 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:48 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:48 compute-0 ceph-mon[74318]: 3.c scrub starts
Jan 21 23:26:48 compute-0 ceph-mon[74318]: 3.c scrub ok
Jan 21 23:26:48 compute-0 ceph-mon[74318]: Deploying daemon haproxy.rgw.default.compute-0.xtqnkr on compute-0
Jan 21 23:26:48 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:48 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 21 23:26:48 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:26:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 21 23:26:48 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 21 23:26:48 compute-0 ceph-mgr[74614]: [progress INFO root] update: starting ev a80baf03-7b29-47c1-a9e1-cd31e5e53923 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 21 23:26:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Jan 21 23:26:48 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:26:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e7 new map
Jan 21 23:26:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-21T23:26:19.155977+0000
                                           modified        2026-01-21T23:26:48.800453+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24146}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.kghltm{0:24146} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/153885020,v1:192.168.122.102:6805/153885020] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zcqesz{-1:14400} state up:standby seq 1 addr [v2:192.168.122.100:6806/2304131850,v1:192.168.122.100:6807/2304131850] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.cwcbdu{-1:24164} state up:standby seq 1 addr [v2:192.168.122.101:6804/2307437297,v1:192.168.122.101:6805/2307437297] compat {c=[1],r=[1],i=[7ff]}]
Jan 21 23:26:48 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2307437297,v1:192.168.122.101:6805/2307437297] up:boot
Jan 21 23:26:48 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/153885020,v1:192.168.122.102:6805/153885020] up:active
Jan 21 23:26:48 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.kghltm=up:active} 2 up:standby
Jan 21 23:26:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.cwcbdu"} v 0) v1
Jan 21 23:26:48 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.cwcbdu"}]: dispatch
Jan 21 23:26:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e7 all = 0
Jan 21 23:26:48 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 21 23:26:48 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 21 23:26:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 21 23:26:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 21 23:26:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 21 23:26:49 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 21 23:26:49 compute-0 ceph-mgr[74614]: [progress INFO root] update: starting ev 9b008376-5d93-4957-b67c-05ed7593938b (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 21 23:26:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Jan 21 23:26:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:26:49 compute-0 ceph-mon[74318]: pgmap v137: 104 pgs: 1 creating+peering, 103 active+clean; 452 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 3.4 KiB/s rd, 3.7 KiB/s wr, 15 op/s
Jan 21 23:26:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 21 23:26:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:26:49 compute-0 ceph-mon[74318]: osdmap e50: 3 total, 3 up, 3 in
Jan 21 23:26:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:26:49 compute-0 ceph-mon[74318]: mds.? [v2:192.168.122.101:6804/2307437297,v1:192.168.122.101:6805/2307437297] up:boot
Jan 21 23:26:49 compute-0 ceph-mon[74318]: mds.? [v2:192.168.122.102:6804/153885020,v1:192.168.122.102:6805/153885020] up:active
Jan 21 23:26:49 compute-0 ceph-mon[74318]: fsmap cephfs:1 {0=cephfs.compute-2.kghltm=up:active} 2 up:standby
Jan 21 23:26:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.cwcbdu"}]: dispatch
Jan 21 23:26:49 compute-0 ceph-mon[74318]: 3.d scrub starts
Jan 21 23:26:49 compute-0 ceph-mon[74318]: 3.d scrub ok
Jan 21 23:26:49 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 21 23:26:49 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 21 23:26:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e8 new map
Jan 21 23:26:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-21T23:26:19.155977+0000
                                           modified        2026-01-21T23:26:48.800453+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24146}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.kghltm{0:24146} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/153885020,v1:192.168.122.102:6805/153885020] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zcqesz{-1:14400} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2304131850,v1:192.168.122.100:6807/2304131850] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.cwcbdu{-1:24164} state up:standby seq 1 addr [v2:192.168.122.101:6804/2307437297,v1:192.168.122.101:6805/2307437297] compat {c=[1],r=[1],i=[7ff]}]
Jan 21 23:26:49 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz Updating MDS map to version 8 from mon.0
Jan 21 23:26:49 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2304131850,v1:192.168.122.100:6807/2304131850] up:standby
Jan 21 23:26:49 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.kghltm=up:active} 2 up:standby
Jan 21 23:26:50 compute-0 ceph-mgr[74614]: [progress INFO root] Writing back 11 completed events
Jan 21 23:26:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 21 23:26:50 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v140: 135 pgs: 31 unknown, 104 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 391 KiB/s rd, 8.0 KiB/s wr, 713 op/s
Jan 21 23:26:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 21 23:26:50 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Jan 21 23:26:50 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 21 23:26:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 21 23:26:50 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 21 23:26:50 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:26:50 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 21 23:26:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 21 23:26:50 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 21 23:26:50 compute-0 ceph-mgr[74614]: [progress INFO root] update: starting ev 7ad24ca0-0711-4d52-8c3f-2cb110688e7f (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 21 23:26:50 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 52 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=52 pruub=14.507221222s) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active pruub 116.291244507s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Jan 21 23:26:50 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:26:50 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 52 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=52 pruub=14.507221222s) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown pruub 116.291244507s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:50 compute-0 ceph-mon[74318]: 4.10 scrub starts
Jan 21 23:26:50 compute-0 ceph-mon[74318]: 4.10 scrub ok
Jan 21 23:26:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 21 23:26:50 compute-0 ceph-mon[74318]: osdmap e51: 3 total, 3 up, 3 in
Jan 21 23:26:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:26:50 compute-0 ceph-mon[74318]: 3.a scrub starts
Jan 21 23:26:50 compute-0 ceph-mon[74318]: 3.a scrub ok
Jan 21 23:26:50 compute-0 ceph-mon[74318]: mds.? [v2:192.168.122.100:6806/2304131850,v1:192.168.122.100:6807/2304131850] up:standby
Jan 21 23:26:50 compute-0 ceph-mon[74318]: fsmap cephfs:1 {0=cephfs.compute-2.kghltm=up:active} 2 up:standby
Jan 21 23:26:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 21 23:26:51 compute-0 podman[94255]: 2026-01-21 23:26:51.80046649 +0000 UTC m=+3.277340771 container create 7211e0790b8c16c4b77aa220e6c399e65d050a835949f3f7f55e8ab1fe184e20 (image=quay.io/ceph/haproxy:2.3, name=blissful_lalande)
Jan 21 23:26:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 21 23:26:51 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 21 23:26:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 21 23:26:51 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.1e( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.1d( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.16( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.12( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.10( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.17( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.14( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.b( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.7( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.d( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-mgr[74614]: [progress INFO root] update: starting ev 3fd8417e-0b15-4b84-a0d8-16bebeffaab1 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.19( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:51 compute-0 systemd[1]: Started libpod-conmon-7211e0790b8c16c4b77aa220e6c399e65d050a835949f3f7f55e8ab1fe184e20.scope.
Jan 21 23:26:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Jan 21 23:26:51 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.1e( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.1d( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.16( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.12( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.10( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.14( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.17( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.b( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.7( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.0( empty local-lis/les=52/53 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.d( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.19( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:51 compute-0 ceph-mon[74318]: pgmap v140: 135 pgs: 31 unknown, 104 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 391 KiB/s rd, 8.0 KiB/s wr, 713 op/s
Jan 21 23:26:51 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 21 23:26:51 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:26:51 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 21 23:26:51 compute-0 ceph-mon[74318]: osdmap e52: 3 total, 3 up, 3 in
Jan 21 23:26:51 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:26:51 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 21 23:26:51 compute-0 ceph-mon[74318]: osdmap e53: 3 total, 3 up, 3 in
Jan 21 23:26:51 compute-0 podman[94255]: 2026-01-21 23:26:51.781338162 +0000 UTC m=+3.258212483 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 21 23:26:51 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:26:51 compute-0 podman[94255]: 2026-01-21 23:26:51.890127193 +0000 UTC m=+3.367001474 container init 7211e0790b8c16c4b77aa220e6c399e65d050a835949f3f7f55e8ab1fe184e20 (image=quay.io/ceph/haproxy:2.3, name=blissful_lalande)
Jan 21 23:26:51 compute-0 podman[94255]: 2026-01-21 23:26:51.897458024 +0000 UTC m=+3.374332305 container start 7211e0790b8c16c4b77aa220e6c399e65d050a835949f3f7f55e8ab1fe184e20 (image=quay.io/ceph/haproxy:2.3, name=blissful_lalande)
Jan 21 23:26:51 compute-0 podman[94255]: 2026-01-21 23:26:51.900627707 +0000 UTC m=+3.377501978 container attach 7211e0790b8c16c4b77aa220e6c399e65d050a835949f3f7f55e8ab1fe184e20 (image=quay.io/ceph/haproxy:2.3, name=blissful_lalande)
Jan 21 23:26:51 compute-0 blissful_lalande[94370]: 0 0
Jan 21 23:26:51 compute-0 systemd[1]: libpod-7211e0790b8c16c4b77aa220e6c399e65d050a835949f3f7f55e8ab1fe184e20.scope: Deactivated successfully.
Jan 21 23:26:51 compute-0 podman[94255]: 2026-01-21 23:26:51.903015519 +0000 UTC m=+3.379889820 container died 7211e0790b8c16c4b77aa220e6c399e65d050a835949f3f7f55e8ab1fe184e20 (image=quay.io/ceph/haproxy:2.3, name=blissful_lalande)
Jan 21 23:26:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2d9e540b43a6a2fe96de5b4b3db8f537cd6dc1812d7949714366adc7a1aeefe-merged.mount: Deactivated successfully.
Jan 21 23:26:51 compute-0 podman[94255]: 2026-01-21 23:26:51.943098671 +0000 UTC m=+3.419972952 container remove 7211e0790b8c16c4b77aa220e6c399e65d050a835949f3f7f55e8ab1fe184e20 (image=quay.io/ceph/haproxy:2.3, name=blissful_lalande)
Jan 21 23:26:51 compute-0 systemd[1]: libpod-conmon-7211e0790b8c16c4b77aa220e6c399e65d050a835949f3f7f55e8ab1fe184e20.scope: Deactivated successfully.
Jan 21 23:26:52 compute-0 systemd[1]: Reloading.
Jan 21 23:26:52 compute-0 systemd-rc-local-generator[94419]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:26:52 compute-0 systemd-sysv-generator[94422]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:26:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:26:52 compute-0 systemd[1]: Reloading.
Jan 21 23:26:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e9 new map
Jan 21 23:26:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-21T23:26:19.155977+0000
                                           modified        2026-01-21T23:26:48.800453+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24146}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.kghltm{0:24146} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/153885020,v1:192.168.122.102:6805/153885020] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zcqesz{-1:14400} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2304131850,v1:192.168.122.100:6807/2304131850] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.cwcbdu{-1:24164} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2307437297,v1:192.168.122.101:6805/2307437297] compat {c=[1],r=[1],i=[7ff]}]
Jan 21 23:26:52 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2307437297,v1:192.168.122.101:6805/2307437297] up:standby
Jan 21 23:26:52 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.kghltm=up:active} 2 up:standby
Jan 21 23:26:52 compute-0 systemd-rc-local-generator[94458]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:26:52 compute-0 systemd-sysv-generator[94462]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:26:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v143: 181 pgs: 46 unknown, 135 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 391 KiB/s rd, 8.0 KiB/s wr, 714 op/s
Jan 21 23:26:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 21 23:26:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 21 23:26:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:52 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.xtqnkr for 3759241a-7f1c-520d-ba17-879943ee2f00...
Jan 21 23:26:52 compute-0 podman[94515]: 2026-01-21 23:26:52.765480001 +0000 UTC m=+0.043397541 container create fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 21 23:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1998de5086da939934eea0ec7020a5e87de6f468febcf7f23a4a709130b5f1dc/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 21 23:26:52 compute-0 podman[94515]: 2026-01-21 23:26:52.822892484 +0000 UTC m=+0.100810114 container init fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 21 23:26:52 compute-0 podman[94515]: 2026-01-21 23:26:52.827704739 +0000 UTC m=+0.105622309 container start fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 21 23:26:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 21 23:26:52 compute-0 bash[94515]: fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe
Jan 21 23:26:52 compute-0 podman[94515]: 2026-01-21 23:26:52.743773125 +0000 UTC m=+0.021690685 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 21 23:26:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 21 23:26:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:26:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:26:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 21 23:26:52 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.xtqnkr for 3759241a-7f1c-520d-ba17-879943ee2f00.
Jan 21 23:26:52 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr[94530]: [NOTICE] 020/232652 (2) : New worker #1 (4) forked
Jan 21 23:26:52 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 21 23:26:52 compute-0 ceph-mgr[74614]: [progress INFO root] update: starting ev a64f461a-4fc9-4f81-8962-aa0dfbc650d5 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 21 23:26:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Jan 21 23:26:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:26:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:26:52 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:26:52 compute-0 ceph-mon[74318]: mds.? [v2:192.168.122.101:6804/2307437297,v1:192.168.122.101:6805/2307437297] up:standby
Jan 21 23:26:52 compute-0 ceph-mon[74318]: fsmap cephfs:1 {0=cephfs.compute-2.kghltm=up:active} 2 up:standby
Jan 21 23:26:52 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:52 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:52 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 21 23:26:52 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:26:52 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:26:52 compute-0 ceph-mon[74318]: osdmap e54: 3 total, 3 up, 3 in
Jan 21 23:26:52 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 21 23:26:52 compute-0 sudo[94189]: pam_unix(sudo:session): session closed for user root
Jan 21 23:26:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:26:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:26:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 21 23:26:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:52 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.umvtxm on compute-2
Jan 21 23:26:52 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.umvtxm on compute-2
Jan 21 23:26:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 21 23:26:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 21 23:26:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 21 23:26:53 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 21 23:26:53 compute-0 ceph-mgr[74614]: [progress INFO root] update: starting ev 08f592ef-9f61-4347-a6b1-f8500e5e65bc (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 21 23:26:53 compute-0 ceph-mgr[74614]: [progress INFO root] complete: finished ev 81638e6f-2558-41a7-976c-dadf7ce51f29 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 21 23:26:53 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event 81638e6f-2558-41a7-976c-dadf7ce51f29 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Jan 21 23:26:53 compute-0 ceph-mgr[74614]: [progress INFO root] complete: finished ev a80baf03-7b29-47c1-a9e1-cd31e5e53923 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 21 23:26:53 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event a80baf03-7b29-47c1-a9e1-cd31e5e53923 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Jan 21 23:26:53 compute-0 ceph-mgr[74614]: [progress INFO root] complete: finished ev 9b008376-5d93-4957-b67c-05ed7593938b (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 21 23:26:53 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event 9b008376-5d93-4957-b67c-05ed7593938b (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Jan 21 23:26:53 compute-0 ceph-mgr[74614]: [progress INFO root] complete: finished ev 7ad24ca0-0711-4d52-8c3f-2cb110688e7f (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 21 23:26:53 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event 7ad24ca0-0711-4d52-8c3f-2cb110688e7f (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Jan 21 23:26:53 compute-0 ceph-mgr[74614]: [progress INFO root] complete: finished ev 3fd8417e-0b15-4b84-a0d8-16bebeffaab1 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 21 23:26:53 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event 3fd8417e-0b15-4b84-a0d8-16bebeffaab1 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 21 23:26:53 compute-0 ceph-mgr[74614]: [progress INFO root] complete: finished ev a64f461a-4fc9-4f81-8962-aa0dfbc650d5 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 21 23:26:53 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event a64f461a-4fc9-4f81-8962-aa0dfbc650d5 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 21 23:26:53 compute-0 ceph-mgr[74614]: [progress INFO root] complete: finished ev 08f592ef-9f61-4347-a6b1-f8500e5e65bc (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 21 23:26:53 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event 08f592ef-9f61-4347-a6b1-f8500e5e65bc (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 21 23:26:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=1.018026829s ======
Jan 21 23:26:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:26:52.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=1.018026829s
Jan 21 23:26:53 compute-0 ceph-mon[74318]: pgmap v143: 181 pgs: 46 unknown, 135 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 391 KiB/s rd, 8.0 KiB/s wr, 714 op/s
Jan 21 23:26:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:53 compute-0 ceph-mon[74318]: Deploying daemon haproxy.rgw.default.compute-2.umvtxm on compute-2
Jan 21 23:26:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 21 23:26:53 compute-0 ceph-mon[74318]: osdmap e55: 3 total, 3 up, 3 in
Jan 21 23:26:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v146: 243 pgs: 77 unknown, 166 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 21 23:26:54 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 21 23:26:54 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 21 23:26:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 21 23:26:54 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:26:54 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:26:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 21 23:26:54 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 21 23:26:55 compute-0 ceph-mgr[74614]: [progress INFO root] Writing back 18 completed events
Jan 21 23:26:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 21 23:26:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:26:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:26:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:26:55.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:26:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 21 23:26:55 compute-0 ceph-mon[74318]: pgmap v146: 243 pgs: 77 unknown, 166 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:55 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:26:55 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 23:26:55 compute-0 ceph-mon[74318]: osdmap e56: 3 total, 3 up, 3 in
Jan 21 23:26:55 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 21 23:26:55 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 56 pg[10.0( v 45'48 (0'0,45'48] local-lis/les=44/45 n=8 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=10.951359749s) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 45'47 mlcod 45'47 active pruub 118.078056335s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.0( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=10.951359749s) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 45'47 mlcod 0'0 unknown pruub 118.078056335s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.1( v 45'48 (0'0,45'48] local-lis/les=44/45 n=1 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.2( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.3( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.4( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.5( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.6( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.8( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.9( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.7( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.b( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.a( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.c( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.d( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.e( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.10( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.f( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.11( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.13( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.14( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.12( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.15( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.16( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.17( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.18( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.19( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.1a( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.1b( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.1d( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.1c( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.1e( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 57 pg[10.1f( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:26:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 62 unknown, 243 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 21 23:26:56 compute-0 ceph-mon[74318]: 4.11 scrub starts
Jan 21 23:26:56 compute-0 ceph-mon[74318]: 4.11 scrub ok
Jan 21 23:26:56 compute-0 ceph-mon[74318]: osdmap e57: 3 total, 3 up, 3 in
Jan 21 23:26:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 21 23:26:57 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.10( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.11( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.17( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.18( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.1b( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.13( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.e( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.12( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.1f( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.7( v 45'48 (0'0,45'48] local-lis/les=56/58 n=1 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.1e( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.9( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.1d( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.1c( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.6( v 45'48 (0'0,45'48] local-lis/les=56/58 n=1 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.1a( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.1( v 45'48 (0'0,45'48] local-lis/les=56/58 n=1 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.5( v 45'48 (0'0,45'48] local-lis/les=56/58 n=1 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.19( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.4( v 45'48 (0'0,45'48] local-lis/les=56/58 n=1 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.b( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.8( v 45'48 (0'0,45'48] local-lis/les=56/58 n=1 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.c( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.a( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.f( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.0( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 45'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.2( v 45'48 (0'0,45'48] local-lis/les=56/58 n=1 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.14( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.d( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.3( v 45'48 (0'0,45'48] local-lis/les=56/58 n=1 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.15( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 58 pg[10.16( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:26:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:26:57 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 21 23:26:57 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 21 23:26:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:26:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:26:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:26:57.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:26:58 compute-0 ceph-mon[74318]: 4.8 scrub starts
Jan 21 23:26:58 compute-0 ceph-mon[74318]: 4.8 scrub ok
Jan 21 23:26:58 compute-0 ceph-mon[74318]: pgmap v149: 305 pgs: 62 unknown, 243 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:58 compute-0 ceph-mon[74318]: osdmap e58: 3 total, 3 up, 3 in
Jan 21 23:26:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v151: 305 pgs: 62 unknown, 243 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:26:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:26:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:26:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:26:58.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:26:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:26:58 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:26:58 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 21 23:26:58 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0) v1
Jan 21 23:26:58 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:58 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 23:26:58 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 23:26:58 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 23:26:58 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 23:26:58 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 21 23:26:58 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.smcebf on compute-2
Jan 21 23:26:58 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.smcebf on compute-2
Jan 21 23:26:58 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 21 23:26:59 compute-0 ceph-mon[74318]: 4.d scrub starts
Jan 21 23:26:59 compute-0 ceph-mon[74318]: 4.d scrub ok
Jan 21 23:26:59 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:59 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:59 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:59 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:26:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:26:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:26:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:26:59.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:00 compute-0 ceph-mon[74318]: 4.12 scrub starts
Jan 21 23:27:00 compute-0 ceph-mon[74318]: 4.12 scrub ok
Jan 21 23:27:00 compute-0 ceph-mon[74318]: pgmap v151: 305 pgs: 62 unknown, 243 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:27:00 compute-0 ceph-mon[74318]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 23:27:00 compute-0 ceph-mon[74318]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 23:27:00 compute-0 ceph-mon[74318]: 4.c scrub starts
Jan 21 23:27:00 compute-0 ceph-mon[74318]: Deploying daemon keepalived.rgw.default.compute-2.smcebf on compute-2
Jan 21 23:27:00 compute-0 ceph-mon[74318]: 4.c scrub ok
Jan 21 23:27:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:27:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 21 23:27:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:27:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 21 23:27:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:27:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Jan 21 23:27:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 21 23:27:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 21 23:27:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:27:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Jan 21 23:27:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 21 23:27:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 21 23:27:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:27:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 21 23:27:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:27:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:00.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 21 23:27:01 compute-0 ceph-mon[74318]: 3.15 scrub starts
Jan 21 23:27:01 compute-0 ceph-mon[74318]: 3.15 scrub ok
Jan 21 23:27:01 compute-0 ceph-mon[74318]: pgmap v152: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:27:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:27:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:27:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 21 23:27:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:27:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 21 23:27:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:27:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:27:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:27:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:27:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 21 23:27:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:27:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 21 23:27:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:27:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:27:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 21 23:27:01 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[8.14( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[11.14( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[8.17( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[11.1( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[8.8( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[11.5( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[11.4( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[8.4( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[11.7( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[8.1b( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[11.1b( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[8.18( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[11.1d( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[11.1c( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[11.1e( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[8.10( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[11.f( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[11.1a( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[8.19( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[8.12( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[11.12( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.13( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.319306374s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 123.983070374s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.1e( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.133798599s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.797576904s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.13( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.319240570s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.983070374s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.1e( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.133721352s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.797576904s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.1d( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.133416176s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.797424316s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.11( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.315074921s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 123.979133606s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.1d( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.133364677s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.797424316s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.11( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.315038681s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.979133606s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.1b( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.318519592s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 123.983085632s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.18( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.318445206s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 123.983047485s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.16( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132821083s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.797462463s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.18( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.318395615s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.983047485s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.16( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132761002s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.797462463s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.1b( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.318386078s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.983085632s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.1( v 45'48 (0'0,45'48] local-lis/les=56/58 n=1 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.318099976s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 123.983078003s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.10( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.314132690s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 123.979148865s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.1( v 45'48 (0'0,45'48] local-lis/les=56/58 n=1 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.318056107s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.983078003s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.10( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.314072609s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.979148865s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.4( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132413864s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.797523499s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.3( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132854462s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.798133850s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.4( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132353783s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.797523499s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.a( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132216454s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.797538757s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.1f( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132237434s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.797569275s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.12( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.317958832s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 123.983329773s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.3( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132814407s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.798133850s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.12( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.317914009s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.983329773s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.a( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132171631s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.797538757s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.1f( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132215500s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.797569275s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.1e( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.317658424s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 123.983383179s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.1e( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.317629814s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.983383179s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.10( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132110596s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.797912598s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.13( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132098198s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.797889709s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.10( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132039070s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.797912598s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.13( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.131990433s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.797889709s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.11( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132018089s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.797958374s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.11( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.131989479s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.797958374s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.19( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.317459106s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 123.983581543s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.14( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.131733894s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.797912598s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.19( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.317413330s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.983581543s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.14( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.131709099s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.797912598s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.b( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.131646156s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.797966003s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.5( v 45'48 (0'0,45'48] local-lis/les=56/58 n=1 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.317203522s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 123.983566284s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.b( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.131615639s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.797966003s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.5( v 45'48 (0'0,45'48] local-lis/les=56/58 n=1 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.317174911s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.983566284s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.8( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.131519318s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.797958374s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.4( v 45'48 (0'0,45'48] local-lis/les=56/58 n=1 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.317070961s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 123.983604431s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.4( v 45'48 (0'0,45'48] local-lis/les=56/58 n=1 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.317047119s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.983604431s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.8( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.131435394s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.797958374s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.9( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.131334305s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.797981262s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.9( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.131300926s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.797981262s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.6( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.131427765s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.798118591s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.6( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.131389618s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.798118591s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.5( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.133393288s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.800239563s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.5( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.133358002s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.800239563s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.8( v 45'48 (0'0,45'48] local-lis/les=56/58 n=1 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.316727638s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 123.983627319s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.f( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.316669464s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 123.983711243s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.f( v 45'48 (0'0,45'48] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.316633224s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.983711243s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.8( v 45'48 (0'0,45'48] local-lis/les=56/58 n=1 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.316661835s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.983627319s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.2( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.133031845s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.800163269s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.2( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132989883s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.800163269s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.2( v 45'48 (0'0,45'48] local-lis/les=56/58 n=1 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.316514969s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 123.983757019s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.2( v 45'48 (0'0,45'48] local-lis/les=56/58 n=1 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.316491127s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.983757019s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.f( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132889748s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.800209045s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.3( v 58'51 (0'0,58'51] local-lis/les=56/58 n=1 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.316396713s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=58'49 lcod 58'50 mlcod 58'50 active pruub 123.983802795s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.e( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132757187s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.800193787s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.f( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132858276s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.800209045s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.3( v 58'51 (0'0,58'51] local-lis/les=56/58 n=1 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.316330910s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=58'49 lcod 58'50 mlcod 0'0 unknown NOTIFY pruub 123.983802795s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.e( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132738113s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.800193787s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.15( v 58'51 (0'0,58'51] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.316165924s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=58'49 lcod 58'50 mlcod 58'50 active pruub 123.983840942s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.18( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132526398s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.800224304s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.18( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132503510s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.800224304s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.15( v 58'51 (0'0,58'51] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.316100121s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=58'49 lcod 58'50 mlcod 0'0 unknown NOTIFY pruub 123.983840942s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.14( v 58'51 (0'0,58'51] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.316043854s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=58'49 lcod 58'50 mlcod 58'50 active pruub 123.983787537s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.1b( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132436752s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 126.800247192s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[10.14( v 58'51 (0'0,58'51] local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=11.315944672s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=58'49 lcod 58'50 mlcod 0'0 unknown NOTIFY pruub 123.983787537s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[5.1b( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[7.1b( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=14.132399559s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.800247192s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[5.1c( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[5.1f( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[5.9( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[5.15( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[5.2( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[5.f( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[5.7( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[5.18( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[5.1( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[5.10( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[5.11( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 59 pg[5.16( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:01.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:27:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v154: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:27:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Jan 21 23:27:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 21 23:27:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Jan 21 23:27:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 21 23:27:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:02.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 21 23:27:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:27:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:27:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 21 23:27:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:27:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 21 23:27:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:27:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:27:02 compute-0 ceph-mon[74318]: osdmap e59: 3 total, 3 up, 3 in
Jan 21 23:27:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 21 23:27:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 21 23:27:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 21 23:27:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 21 23:27:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 21 23:27:02 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 21 23:27:02 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[6.2( empty local-lis/les=0/0 n=0 ec=52/24 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[11.14( empty local-lis/les=59/60 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[6.6( empty local-lis/les=0/0 n=0 ec=52/24 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[6.a( empty local-lis/les=0/0 n=0 ec=52/24 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[6.e( empty local-lis/les=0/0 n=0 ec=52/24 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[8.17( v 41'4 (0'0,41'4] local-lis/les=59/60 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=41'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[5.1b( empty local-lis/les=59/60 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[11.1( empty local-lis/les=59/60 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[5.2( empty local-lis/les=59/60 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[8.8( v 41'4 lc 0'0 (0'0,41'4] local-lis/les=59/60 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=41'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[5.7( empty local-lis/les=59/60 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[11.5( empty local-lis/les=59/60 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[11.4( empty local-lis/les=59/60 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[8.14( v 41'4 (0'0,41'4] local-lis/les=59/60 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=41'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[8.4( v 41'4 (0'0,41'4] local-lis/les=59/60 n=1 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=41'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[11.7( empty local-lis/les=59/60 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[5.9( empty local-lis/les=59/60 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[5.16( empty local-lis/les=59/60 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[8.1b( v 41'4 (0'0,41'4] local-lis/les=59/60 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=41'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[5.15( empty local-lis/les=59/60 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[11.1b( empty local-lis/les=59/60 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[5.f( empty local-lis/les=59/60 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[8.18( v 41'4 (0'0,41'4] local-lis/les=59/60 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=41'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[11.1c( empty local-lis/les=59/60 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[11.1d( empty local-lis/les=59/60 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[5.11( empty local-lis/les=59/60 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[11.1e( empty local-lis/les=59/60 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[8.10( v 41'4 (0'0,41'4] local-lis/les=59/60 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=41'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[5.1( empty local-lis/les=59/60 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[11.f( empty local-lis/les=59/60 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[5.10( empty local-lis/les=59/60 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[11.1a( empty local-lis/les=59/60 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[5.1f( empty local-lis/les=59/60 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[8.19( v 41'4 (0'0,41'4] local-lis/les=59/60 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=41'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[11.12( empty local-lis/les=59/60 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[8.12( v 41'4 (0'0,41'4] local-lis/les=59/60 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=41'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[5.1c( empty local-lis/les=59/60 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 60 pg[5.18( empty local-lis/les=59/60 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:02 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 21 23:27:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 21 23:27:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:03.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 21 23:27:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 21 23:27:04 compute-0 ceph-mon[74318]: 3.e scrub starts
Jan 21 23:27:04 compute-0 ceph-mon[74318]: 3.e scrub ok
Jan 21 23:27:04 compute-0 ceph-mon[74318]: 4.16 scrub starts
Jan 21 23:27:04 compute-0 ceph-mon[74318]: 4.16 scrub ok
Jan 21 23:27:04 compute-0 ceph-mon[74318]: pgmap v154: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:27:04 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 21 23:27:04 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 21 23:27:04 compute-0 ceph-mon[74318]: osdmap e60: 3 total, 3 up, 3 in
Jan 21 23:27:04 compute-0 ceph-mon[74318]: 3.5 scrub starts
Jan 21 23:27:04 compute-0 ceph-mon[74318]: 3.5 scrub ok
Jan 21 23:27:04 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 21 23:27:04 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 21 23:27:04 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 61 pg[6.a( v 49'39 (0'0,49'39] local-lis/les=60/61 n=1 ec=52/24 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:04 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 61 pg[6.6( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=60/61 n=2 ec=52/24 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=49'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:04 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 61 pg[6.e( v 49'39 lc 46'17 (0'0,49'39] local-lis/les=60/61 n=1 ec=52/24 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:04 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 61 pg[6.2( v 49'39 (0'0,49'39] local-lis/les=60/61 n=2 ec=52/24 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v157: 305 pgs: 4 peering, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 145 B/s, 0 objects/s recovering
Jan 21 23:27:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:04.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:04 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 21 23:27:04 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 21 23:27:05 compute-0 ceph-mon[74318]: osdmap e61: 3 total, 3 up, 3 in
Jan 21 23:27:05 compute-0 ceph-mon[74318]: pgmap v157: 305 pgs: 4 peering, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 145 B/s, 0 objects/s recovering
Jan 21 23:27:05 compute-0 ceph-mon[74318]: 4.5 scrub starts
Jan 21 23:27:05 compute-0 ceph-mon[74318]: 4.5 scrub ok
Jan 21 23:27:05 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 21 23:27:05 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 21 23:27:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:27:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:05.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:27:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:27:05 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:27:05 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 21 23:27:05 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:05 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 23:27:05 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 23:27:05 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 23:27:05 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 23:27:05 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.ieqyao on compute-0
Jan 21 23:27:05 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.ieqyao on compute-0
Jan 21 23:27:06 compute-0 sudo[94545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:06 compute-0 sudo[94545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:06 compute-0 sudo[94545]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:06 compute-0 sudo[94570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:27:06 compute-0 sudo[94570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:06 compute-0 sudo[94570]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:06 compute-0 sudo[94595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:06 compute-0 sudo[94595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:06 compute-0 sudo[94595]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:06 compute-0 sudo[94620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:27:06 compute-0 sudo[94620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 4 peering, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 363 B/s, 1 keys/s, 2 objects/s recovering
Jan 21 23:27:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:27:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:06.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:27:07 compute-0 ceph-mon[74318]: 3.9 scrub starts
Jan 21 23:27:07 compute-0 ceph-mon[74318]: 3.9 scrub ok
Jan 21 23:27:07 compute-0 ceph-mon[74318]: 4.1b scrub starts
Jan 21 23:27:07 compute-0 ceph-mon[74318]: 4.1b scrub ok
Jan 21 23:27:07 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:07 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:07 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:07 compute-0 ceph-mon[74318]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 21 23:27:07 compute-0 ceph-mon[74318]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 21 23:27:07 compute-0 ceph-mon[74318]: Deploying daemon keepalived.rgw.default.compute-0.ieqyao on compute-0
Jan 21 23:27:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:27:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:07.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:08 compute-0 ceph-mon[74318]: pgmap v158: 305 pgs: 4 peering, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 363 B/s, 1 keys/s, 2 objects/s recovering
Jan 21 23:27:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v159: 305 pgs: 4 peering, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 320 B/s, 1 keys/s, 2 objects/s recovering
Jan 21 23:27:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:08.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:08 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 21 23:27:08 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 21 23:27:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:27:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:27:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:27:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:27:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:27:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:27:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:09.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v160: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 272 B/s, 1 keys/s, 2 objects/s recovering
Jan 21 23:27:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Jan 21 23:27:10 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 21 23:27:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Jan 21 23:27:10 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 21 23:27:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 21 23:27:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:10.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 21 23:27:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 21 23:27:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 21 23:27:11 compute-0 ceph-mon[74318]: 4.9 scrub starts
Jan 21 23:27:11 compute-0 ceph-mon[74318]: 4.9 scrub ok
Jan 21 23:27:11 compute-0 ceph-mon[74318]: 4.17 scrub starts
Jan 21 23:27:11 compute-0 ceph-mon[74318]: 4.17 scrub ok
Jan 21 23:27:11 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 21 23:27:11 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=52/24 lis/c=59/59 les/c/f=60/60/0 sis=62) [1] r=0 lpr=62 pi=[59,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:11 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=52/24 lis/c=59/59 les/c/f=60/60/0 sis=62) [1] r=0 lpr=62 pi=[59,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:11 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 62 pg[6.3( empty local-lis/les=0/0 n=0 ec=52/24 lis/c=59/59 les/c/f=60/61/0 sis=62) [1] r=0 lpr=62 pi=[59,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:11 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 62 pg[6.7( empty local-lis/les=0/0 n=0 ec=52/24 lis/c=59/59 les/c/f=60/61/0 sis=62) [1] r=0 lpr=62 pi=[59,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:11 compute-0 podman[94687]: 2026-01-21 23:27:11.858386149 +0000 UTC m=+5.225880035 container create 172d0a0004674d2e5875d91fe1f70d96891cbb5dd6307eb1efd7b01b8e52a725 (image=quay.io/ceph/keepalived:2.2.4, name=amazing_cerf, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, vcs-type=git, release=1793, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., description=keepalived for Ceph, build-date=2023-02-22T09:23:20, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 21 23:27:11 compute-0 podman[94687]: 2026-01-21 23:27:11.828788418 +0000 UTC m=+5.196282294 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 21 23:27:11 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 21 23:27:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 21 23:27:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:11.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 21 23:27:11 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 21 23:27:11 compute-0 systemd[1]: Started libpod-conmon-172d0a0004674d2e5875d91fe1f70d96891cbb5dd6307eb1efd7b01b8e52a725.scope.
Jan 21 23:27:11 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:11 compute-0 podman[94687]: 2026-01-21 23:27:11.95028146 +0000 UTC m=+5.317775346 container init 172d0a0004674d2e5875d91fe1f70d96891cbb5dd6307eb1efd7b01b8e52a725 (image=quay.io/ceph/keepalived:2.2.4, name=amazing_cerf, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, name=keepalived, io.buildah.version=1.28.2, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, architecture=x86_64, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 21 23:27:11 compute-0 podman[94687]: 2026-01-21 23:27:11.956664085 +0000 UTC m=+5.324157961 container start 172d0a0004674d2e5875d91fe1f70d96891cbb5dd6307eb1efd7b01b8e52a725 (image=quay.io/ceph/keepalived:2.2.4, name=amazing_cerf, release=1793, io.openshift.tags=Ceph keepalived, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.buildah.version=1.28.2, vcs-type=git, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4)
Jan 21 23:27:11 compute-0 podman[94687]: 2026-01-21 23:27:11.96144435 +0000 UTC m=+5.328938236 container attach 172d0a0004674d2e5875d91fe1f70d96891cbb5dd6307eb1efd7b01b8e52a725 (image=quay.io/ceph/keepalived:2.2.4, name=amazing_cerf, io.buildah.version=1.28.2, version=2.2.4, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, architecture=x86_64, com.redhat.component=keepalived-container, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 21 23:27:11 compute-0 amazing_cerf[94783]: 0 0
Jan 21 23:27:11 compute-0 systemd[1]: libpod-172d0a0004674d2e5875d91fe1f70d96891cbb5dd6307eb1efd7b01b8e52a725.scope: Deactivated successfully.
Jan 21 23:27:11 compute-0 podman[94687]: 2026-01-21 23:27:11.965963148 +0000 UTC m=+5.333457004 container died 172d0a0004674d2e5875d91fe1f70d96891cbb5dd6307eb1efd7b01b8e52a725 (image=quay.io/ceph/keepalived:2.2.4, name=amazing_cerf, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, release=1793, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.component=keepalived-container, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., description=keepalived for Ceph)
Jan 21 23:27:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c23a147bbf5868d32d2500407dc44d99fca77ebcb8f6476c2786ce4c6c65db4-merged.mount: Deactivated successfully.
Jan 21 23:27:12 compute-0 podman[94687]: 2026-01-21 23:27:12.019689796 +0000 UTC m=+5.387183662 container remove 172d0a0004674d2e5875d91fe1f70d96891cbb5dd6307eb1efd7b01b8e52a725 (image=quay.io/ceph/keepalived:2.2.4, name=amazing_cerf, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, version=2.2.4, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public)
Jan 21 23:27:12 compute-0 systemd[1]: libpod-conmon-172d0a0004674d2e5875d91fe1f70d96891cbb5dd6307eb1efd7b01b8e52a725.scope: Deactivated successfully.
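The create -> init -> start -> attach -> died -> remove sequence above completes in about 160 ms: cephadm launches a throwaway keepalived container (auto-named amazing_cerf) to probe the image, the container prints "0 0" (likely a uid/gid probe of the daemon user), and exits. The same event stream can be replayed from podman's journal:

    # Hypothetical replay of the events logged above (container ID taken from the log):
    podman events --since "2026-01-21 23:27:11" --until "2026-01-21 23:27:13" \
        --filter container=172d0a0004674d2e5875d91fe1f70d96891cbb5dd6307eb1efd7b01b8e52a725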
Jan 21 23:27:12 compute-0 systemd[1]: Reloading.
Jan 21 23:27:12 compute-0 systemd-rc-local-generator[94829]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:27:12 compute-0 systemd-sysv-generator[94834]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:27:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:27:12 compute-0 systemd[1]: Reloading.
Jan 21 23:27:12 compute-0 systemd-rc-local-generator[94872]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:27:12 compute-0 systemd-sysv-generator[94875]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
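Each "Reloading." entry is a systemd daemon-reload issued while cephadm installs or updates unit files for the ingress daemons; the two generator notices that follow are re-emitted on every reload and are unrelated to Ceph. The rc.local notice goes away once the script is marked executable; the network-script warning persists until the legacy initscript gains a native unit:

    chmod +x /etc/rc.d/rc.local    # systemd-rc-local-generator then stops skipping it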
Jan 21 23:27:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 1 active+clean+scrubbing+deep, 304 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 157 B/s, 1 keys/s, 2 objects/s recovering
Jan 21 23:27:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Jan 21 23:27:12 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 21 23:27:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Jan 21 23:27:12 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 21 23:27:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:12.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:12 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.ieqyao for 3759241a-7f1c-520d-ba17-879943ee2f00...
Jan 21 23:27:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 21 23:27:12 compute-0 ceph-mon[74318]: pgmap v159: 305 pgs: 4 peering, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 320 B/s, 1 keys/s, 2 objects/s recovering
Jan 21 23:27:12 compute-0 ceph-mon[74318]: 4.18 scrub starts
Jan 21 23:27:12 compute-0 ceph-mon[74318]: 4.18 scrub ok
Jan 21 23:27:12 compute-0 ceph-mon[74318]: 3.1d scrub starts
Jan 21 23:27:12 compute-0 ceph-mon[74318]: 3.1d scrub ok
Jan 21 23:27:12 compute-0 ceph-mon[74318]: 4.1 deep-scrub starts
Jan 21 23:27:12 compute-0 ceph-mon[74318]: pgmap v160: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 272 B/s, 1 keys/s, 2 objects/s recovering
Jan 21 23:27:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 21 23:27:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 21 23:27:12 compute-0 ceph-mon[74318]: 4.1e scrub starts
Jan 21 23:27:12 compute-0 ceph-mon[74318]: 4.1e scrub ok
Jan 21 23:27:12 compute-0 ceph-mon[74318]: 4.1 deep-scrub ok
Jan 21 23:27:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 21 23:27:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 21 23:27:12 compute-0 ceph-mon[74318]: osdmap e62: 3 total, 3 up, 3 in
Jan 21 23:27:12 compute-0 ceph-mon[74318]: 3.1c scrub starts
Jan 21 23:27:12 compute-0 ceph-mon[74318]: 3.1c scrub ok
Jan 21 23:27:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 21 23:27:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 21 23:27:12 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 21 23:27:12 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
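The repeated "osd pool set ... pgp_num_actual" commands are the mgr stepping pgp_num up one placement group at a time (4 -> 5 -> 6 across this window) to follow a pg_num increase on the two pools; each step remaps a small slice of data, which is why brief peering and recovery blips show up in the pgmap lines. One audited step, issued by hand, would be:

    ceph osd pool set cephfs.cephfs.meta pgp_num_actual 5
    ceph osd pool get cephfs.cephfs.meta pgp_num    # compare the target with the applied value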
Jan 21 23:27:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 21 23:27:12 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 21 23:27:12 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 63 pg[6.3( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=62/63 n=2 ec=52/24 lis/c=59/59 les/c/f=60/61/0 sis=62) [1] r=0 lpr=62 pi=[59,62)/1 crt=49'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:12 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 63 pg[6.f( v 49'39 lc 46'1 (0'0,49'39] local-lis/les=62/63 n=1 ec=52/24 lis/c=59/59 les/c/f=60/60/0 sis=62) [1] r=0 lpr=62 pi=[59,62)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:12 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 63 pg[6.7( v 49'39 lc 46'18 (0'0,49'39] local-lis/les=62/63 n=1 ec=52/24 lis/c=59/59 les/c/f=60/61/0 sis=62) [1] r=0 lpr=62 pi=[59,62)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:12 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 63 pg[6.b( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=62/63 n=1 ec=52/24 lis/c=59/59 les/c/f=60/60/0 sis=62) [1] r=0 lpr=62 pi=[59,62)/1 crt=49'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:12 compute-0 podman[94930]: 2026-01-21 23:27:12.970358053 +0000 UTC m=+0.068844782 container create 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2023-02-22T09:23:20, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, description=keepalived for Ceph, io.buildah.version=1.28.2, release=1793, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived)
Jan 21 23:27:13 compute-0 podman[94930]: 2026-01-21 23:27:12.939740676 +0000 UTC m=+0.038227455 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 21 23:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56d74478755b380f113aa6d2c4c964acef55d956cb117ccca23a3a8789a1f386/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:13 compute-0 podman[94930]: 2026-01-21 23:27:13.072513101 +0000 UTC m=+0.170999880 container init 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, version=2.2.4, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, name=keepalived)
Jan 21 23:27:13 compute-0 podman[94930]: 2026-01-21 23:27:13.078721793 +0000 UTC m=+0.177208522 container start 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, vcs-type=git, version=2.2.4, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Jan 21 23:27:13 compute-0 bash[94930]: 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be
Jan 21 23:27:13 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.ieqyao for 3759241a-7f1c-520d-ba17-879943ee2f00.
Jan 21 23:27:13 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao[94946]: Wed Jan 21 23:27:13 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 21 23:27:13 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao[94946]: Wed Jan 21 23:27:13 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 21 23:27:13 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao[94946]: Wed Jan 21 23:27:13 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 21 23:27:13 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao[94946]: Wed Jan 21 23:27:13 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 21 23:27:13 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao[94946]: Wed Jan 21 23:27:13 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 21 23:27:13 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao[94946]: Wed Jan 21 23:27:13 2026: Starting VRRP child process, pid=4
Jan 21 23:27:13 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao[94946]: Wed Jan 21 23:27:13 2026: Startup complete
Jan 21 23:27:13 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao[94946]: Wed Jan 21 23:27:13 2026: (VI_0) Entering BACKUP STATE (init)
Jan 21 23:27:13 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao[94946]: Wed Jan 21 23:27:13 2026: VRRP_Script(check_backend) succeeded
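keepalived starts in BACKUP for VRRP instance VI_0 and immediately passes its check_backend script, which tracks the local ingress backend. A minimal keepalived.conf consistent with these messages; the interface, router id, priority, check command, and VIP are illustrative assumptions, since cephadm renders the real file at /etc/keepalived/keepalived.conf (the xfs remount line above shows it being bind-mounted in):

    vrrp_script check_backend {
        script "/usr/bin/curl -s -f http://localhost:8080/ -o /dev/null"
        interval 2
        weight -20
    }

    vrrp_instance VI_0 {
        state BACKUP
        interface eth0
        virtual_router_id 51
        priority 90
        advert_int 1
        virtual_ipaddress {
            192.168.122.200/24
        }
        track_script {
            check_backend
        }
    }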
Jan 21 23:27:13 compute-0 sudo[94620]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:27:13 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:27:13 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 21 23:27:13 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:13 compute-0 ceph-mgr[74614]: [progress INFO root] complete: finished ev 1c0a765a-6496-4aaf-b0e6-a98a360adffe (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 21 23:27:13 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event 1c0a765a-6496-4aaf-b0e6-a98a360adffe (Updating ingress.rgw.default deployment (+4 -> 4)) in 25 seconds
Jan 21 23:27:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 21 23:27:13 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:13 compute-0 sudo[94955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:13 compute-0 sudo[94954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:13 compute-0 sudo[94954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:13 compute-0 sudo[94955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:13 compute-0 sudo[94954]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:13 compute-0 sudo[94955]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:13 compute-0 sudo[95004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:27:13 compute-0 sudo[95005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:13 compute-0 sudo[95004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:13 compute-0 sudo[95005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:13 compute-0 sudo[95004]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:13 compute-0 sudo[95005]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 21 23:27:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 21 23:27:13 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 21 23:27:13 compute-0 ceph-mon[74318]: pgmap v162: 305 pgs: 1 active+clean+scrubbing+deep, 304 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 157 B/s, 1 keys/s, 2 objects/s recovering
Jan 21 23:27:13 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 21 23:27:13 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 21 23:27:13 compute-0 ceph-mon[74318]: osdmap e63: 3 total, 3 up, 3 in
Jan 21 23:27:13 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:13 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:13 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:13 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:13 compute-0 sudo[95054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:13 compute-0 sudo[95054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:13 compute-0 sudo[95054]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:27:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:13.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:27:13 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 21 23:27:13 compute-0 sudo[95079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:27:13 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 21 23:27:13 compute-0 sudo[95079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:13 compute-0 sudo[95079]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:14 compute-0 sudo[95104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:14 compute-0 sudo[95104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:14 compute-0 sudo[95104]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:14 compute-0 sudo[95129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 21 23:27:14 compute-0 sudo[95129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
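This sudo burst is cephadm's inventory pass: the orchestrator connects as ceph-admin and runs the copied cephadm binary under /var/lib/ceph/<fsid>/ with "ls", which reports every daemon deployed on this host as JSON. The direct form is simply:

    sudo cephadm ls    # lists the mon, mgr, osd, rgw, haproxy and keepalived daemons with their systemd units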
Jan 21 23:27:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v165: 305 pgs: 1 active+clean+scrubbing+deep, 304 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 106 B/s, 1 keys/s, 1 objects/s recovering
Jan 21 23:27:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Jan 21 23:27:14 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 21 23:27:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Jan 21 23:27:14 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 21 23:27:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 21 23:27:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:14.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 21 23:27:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 21 23:27:14 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 21 23:27:14 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 21 23:27:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 21 23:27:14 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 21 23:27:14 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 65 pg[6.5( empty local-lis/les=0/0 n=0 ec=52/24 lis/c=59/59 les/c/f=60/61/0 sis=65) [1] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:14 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 65 pg[6.d( empty local-lis/les=0/0 n=0 ec=52/24 lis/c=59/59 les/c/f=60/61/0 sis=65) [1] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:14 compute-0 ceph-mon[74318]: 2.1e scrub starts
Jan 21 23:27:14 compute-0 ceph-mon[74318]: 2.1e scrub ok
Jan 21 23:27:14 compute-0 ceph-mon[74318]: 4.15 deep-scrub starts
Jan 21 23:27:14 compute-0 ceph-mon[74318]: 4.15 deep-scrub ok
Jan 21 23:27:14 compute-0 ceph-mon[74318]: osdmap e64: 3 total, 3 up, 3 in
Jan 21 23:27:14 compute-0 ceph-mon[74318]: 4.1a scrub starts
Jan 21 23:27:14 compute-0 ceph-mon[74318]: 4.1a scrub ok
Jan 21 23:27:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 21 23:27:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 21 23:27:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 21 23:27:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 21 23:27:14 compute-0 ceph-mon[74318]: osdmap e65: 3 total, 3 up, 3 in
Jan 21 23:27:14 compute-0 podman[95226]: 2026-01-21 23:27:14.796958983 +0000 UTC m=+0.101707628 container exec 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:27:14 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts
Jan 21 23:27:14 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok
Jan 21 23:27:14 compute-0 podman[95226]: 2026-01-21 23:27:14.909735048 +0000 UTC m=+0.214483683 container exec_died 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 21 23:27:15 compute-0 ceph-mgr[74614]: [progress INFO root] Writing back 19 completed events
Jan 21 23:27:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 21 23:27:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:15 compute-0 ceph-mgr[74614]: [progress INFO root] Completed event 2e4db74e-df5e-48a7-a1ee-a12bbe52d9e8 (Global Recovery Event) in 35 seconds
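With recovery done, the mgr progress module closes its "Global Recovery Event" 35 seconds after the OSD churn began. The same event list is available interactively:

    ceph progress         # human-readable summary of running and recent events
    ceph progress json    # raw event list, including the completed events written back above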
Jan 21 23:27:15 compute-0 podman[95383]: 2026-01-21 23:27:15.628650235 +0000 UTC m=+0.046260025 container exec fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 21 23:27:15 compute-0 podman[95383]: 2026-01-21 23:27:15.641853558 +0000 UTC m=+0.059463328 container exec_died fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 21 23:27:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 21 23:27:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 21 23:27:15 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 21 23:27:15 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 66 pg[6.5( v 49'39 lc 46'10 (0'0,49'39] local-lis/les=65/66 n=2 ec=52/24 lis/c=59/59 les/c/f=60/61/0 sis=65) [1] r=0 lpr=65 pi=[59,65)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:15 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 66 pg[6.d( v 49'39 lc 46'13 (0'0,49'39] local-lis/les=65/66 n=1 ec=52/24 lis/c=59/59 les/c/f=60/61/0 sis=65) [1] r=0 lpr=65 pi=[59,65)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:15 compute-0 ceph-mon[74318]: pgmap v165: 305 pgs: 1 active+clean+scrubbing+deep, 304 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 106 B/s, 1 keys/s, 1 objects/s recovering
Jan 21 23:27:15 compute-0 ceph-mon[74318]: 2.1c scrub starts
Jan 21 23:27:15 compute-0 ceph-mon[74318]: 2.1c scrub ok
Jan 21 23:27:15 compute-0 ceph-mon[74318]: 4.e deep-scrub starts
Jan 21 23:27:15 compute-0 ceph-mon[74318]: 4.e deep-scrub ok
Jan 21 23:27:15 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:15 compute-0 ceph-mon[74318]: osdmap e66: 3 total, 3 up, 3 in
Jan 21 23:27:15 compute-0 podman[95447]: 2026-01-21 23:27:15.882343376 +0000 UTC m=+0.057098986 container exec 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, build-date=2023-02-22T09:23:20, distribution-scope=public, com.redhat.component=keepalived-container, release=1793, description=keepalived for Ceph, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 21 23:27:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:27:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:15.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:27:15 compute-0 podman[95447]: 2026-01-21 23:27:15.898919347 +0000 UTC m=+0.073674907 container exec_died 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived)
Jan 21 23:27:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:27:15 compute-0 sudo[95129]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:27:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:27:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:27:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:27:16 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:27:16 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:27:16 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:27:16 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:27:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:27:16 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:16 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 4ed37ce5-08bc-433e-a70b-8ebf84e1368e does not exist
Jan 21 23:27:16 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 87e40342-83c2-419c-932f-fa4140e03fb2 does not exist
Jan 21 23:27:16 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev a079e60a-f1f2-4037-872d-91f053797170 does not exist
Jan 21 23:27:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:27:16 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:27:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:27:16 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:27:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:27:16 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:16 compute-0 sudo[95479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:16 compute-0 sudo[95479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:16 compute-0 sudo[95479]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:16 compute-0 sudo[95504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:27:16 compute-0 sudo[95504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:16 compute-0 sudo[95504]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:16 compute-0 sudo[95529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:16 compute-0 sudo[95529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:16 compute-0 sudo[95529]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 1 active+recovering+remapped, 7 active+recovery_wait+remapped, 1 active+clean+scrubbing+deep, 296 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 35/213 objects misplaced (16.432%); 186 B/s, 2 keys/s, 2 objects/s recovering
Jan 21 23:27:16 compute-0 sudo[95554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:27:16 compute-0 sudo[95554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
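OSD deployment attempt: cephadm streams the OSD spec as JSON on stdin (--config-json -) and runs ceph-volume inside the ceph container, targeting the pre-created LV /dev/ceph_vg0/ceph_lv0, with CEPH_VOLUME_OSDSPEC_AFFINITY pinning any resulting OSD to the default_drive_group spec. A dry-run of the same batch call (assuming the cephadm binary is on PATH):

    sudo cephadm ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- \
        lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --report    # --report previews instead of creating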
Jan 21 23:27:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:16.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:16 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao[94946]: Wed Jan 21 23:27:16 2026: (VI_0) Entering MASTER STATE
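About three seconds after entering BACKUP, VI_0 has heard no higher-priority advertisement and promotes itself, claiming the virtual IP for the RGW ingress on this node. To see which node currently holds the VIP (interface name assumed, as in the config sketch above):

    ip -br addr show dev eth0    # the VRRP address appears as an extra IP on the MASTER only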
Jan 21 23:27:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 21 23:27:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 21 23:27:16 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 21 23:27:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:27:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:27:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:27:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:16 compute-0 ceph-mon[74318]: osdmap e67: 3 total, 3 up, 3 in
Jan 21 23:27:16 compute-0 podman[95619]: 2026-01-21 23:27:16.988359966 +0000 UTC m=+0.069594742 container create 4c03fc881cba23271776f88a8f14c2fe61014a08cc37ed2320f63c1939ea5b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 23:27:17 compute-0 systemd[1]: Started libpod-conmon-4c03fc881cba23271776f88a8f14c2fe61014a08cc37ed2320f63c1939ea5b0e.scope.
Jan 21 23:27:17 compute-0 podman[95619]: 2026-01-21 23:27:16.958100658 +0000 UTC m=+0.039335504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:27:17 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:17 compute-0 podman[95619]: 2026-01-21 23:27:17.085301729 +0000 UTC m=+0.166536595 container init 4c03fc881cba23271776f88a8f14c2fe61014a08cc37ed2320f63c1939ea5b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mayer, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 21 23:27:17 compute-0 podman[95619]: 2026-01-21 23:27:17.095282068 +0000 UTC m=+0.176516874 container start 4c03fc881cba23271776f88a8f14c2fe61014a08cc37ed2320f63c1939ea5b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mayer, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:27:17 compute-0 podman[95619]: 2026-01-21 23:27:17.099681662 +0000 UTC m=+0.180916528 container attach 4c03fc881cba23271776f88a8f14c2fe61014a08cc37ed2320f63c1939ea5b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 21 23:27:17 compute-0 cranky_mayer[95636]: 167 167
Jan 21 23:27:17 compute-0 systemd[1]: libpod-4c03fc881cba23271776f88a8f14c2fe61014a08cc37ed2320f63c1939ea5b0e.scope: Deactivated successfully.
Jan 21 23:27:17 compute-0 podman[95619]: 2026-01-21 23:27:17.102248089 +0000 UTC m=+0.183482885 container died 4c03fc881cba23271776f88a8f14c2fe61014a08cc37ed2320f63c1939ea5b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mayer, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 21 23:27:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e0e064f3dbcfafaf50fdcefa98375337cd39c3a37374b24891aa9909297cddd-merged.mount: Deactivated successfully.
Jan 21 23:27:17 compute-0 podman[95619]: 2026-01-21 23:27:17.168832432 +0000 UTC m=+0.250067228 container remove 4c03fc881cba23271776f88a8f14c2fe61014a08cc37ed2320f63c1939ea5b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 21 23:27:17 compute-0 systemd[1]: libpod-conmon-4c03fc881cba23271776f88a8f14c2fe61014a08cc37ed2320f63c1939ea5b0e.scope: Deactivated successfully.
Jan 21 23:27:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:27:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 21 23:27:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 21 23:27:17 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 21 23:27:17 compute-0 podman[95661]: 2026-01-21 23:27:17.416745983 +0000 UTC m=+0.068815242 container create bffec77b24dff80bd253218f184760308df0d0b5fc3205343624adbf6617e4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:27:17 compute-0 systemd[1]: Started libpod-conmon-bffec77b24dff80bd253218f184760308df0d0b5fc3205343624adbf6617e4ba.scope.
Jan 21 23:27:17 compute-0 podman[95661]: 2026-01-21 23:27:17.386791364 +0000 UTC m=+0.038860673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:27:17 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d08e5eb92dce59c9599fb9b6333833c0e3f165815e2ce69cf6f86e139e3c2ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d08e5eb92dce59c9599fb9b6333833c0e3f165815e2ce69cf6f86e139e3c2ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d08e5eb92dce59c9599fb9b6333833c0e3f165815e2ce69cf6f86e139e3c2ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d08e5eb92dce59c9599fb9b6333833c0e3f165815e2ce69cf6f86e139e3c2ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d08e5eb92dce59c9599fb9b6333833c0e3f165815e2ce69cf6f86e139e3c2ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
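The "supports timestamps until 2038" kernel notices are informational: the bind-mounted paths sit on an XFS filesystem formatted without the bigtime feature, so inode timestamps saturate in 2038; nothing is failing here. To check a given filesystem (mount point assumed):

    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'    # bigtime=1 means post-2038 timestamps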
Jan 21 23:27:17 compute-0 podman[95661]: 2026-01-21 23:27:17.549950239 +0000 UTC m=+0.202019538 container init bffec77b24dff80bd253218f184760308df0d0b5fc3205343624adbf6617e4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 23:27:17 compute-0 podman[95661]: 2026-01-21 23:27:17.561495159 +0000 UTC m=+0.213564418 container start bffec77b24dff80bd253218f184760308df0d0b5fc3205343624adbf6617e4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 21 23:27:17 compute-0 podman[95661]: 2026-01-21 23:27:17.565620027 +0000 UTC m=+0.217689276 container attach bffec77b24dff80bd253218f184760308df0d0b5fc3205343624adbf6617e4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 21 23:27:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:17.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:18 compute-0 ceph-mon[74318]: pgmap v168: 305 pgs: 1 active+recovering+remapped, 7 active+recovery_wait+remapped, 1 active+clean+scrubbing+deep, 296 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 35/213 objects misplaced (16.432%); 186 B/s, 2 keys/s, 2 objects/s recovering
Jan 21 23:27:18 compute-0 ceph-mon[74318]: osdmap e68: 3 total, 3 up, 3 in
Jan 21 23:27:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 21 23:27:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 21 23:27:18 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 21 23:27:18 compute-0 confident_goldwasser[95678]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:27:18 compute-0 confident_goldwasser[95678]: --> relative data size: 1.0
Jan 21 23:27:18 compute-0 confident_goldwasser[95678]: --> All data devices are unavailable
Jan 21 23:27:18 compute-0 systemd[1]: libpod-bffec77b24dff80bd253218f184760308df0d0b5fc3205343624adbf6617e4ba.scope: Deactivated successfully.
Jan 21 23:27:18 compute-0 podman[95693]: 2026-01-21 23:27:18.495316969 +0000 UTC m=+0.041876021 container died bffec77b24dff80bd253218f184760308df0d0b5fc3205343624adbf6617e4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:27:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 8 peering, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 321 B/s, 10 objects/s recovering
Jan 21 23:27:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d08e5eb92dce59c9599fb9b6333833c0e3f165815e2ce69cf6f86e139e3c2ff-merged.mount: Deactivated successfully.
Jan 21 23:27:18 compute-0 podman[95693]: 2026-01-21 23:27:18.576034059 +0000 UTC m=+0.122593021 container remove bffec77b24dff80bd253218f184760308df0d0b5fc3205343624adbf6617e4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:27:18 compute-0 systemd[1]: libpod-conmon-bffec77b24dff80bd253218f184760308df0d0b5fc3205343624adbf6617e4ba.scope: Deactivated successfully.
Jan 21 23:27:18 compute-0 sudo[95554]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:18.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:18 compute-0 sudo[95708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:18 compute-0 sudo[95708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:18 compute-0 sudo[95708]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:18 compute-0 sudo[95733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:27:18 compute-0 sudo[95733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:18 compute-0 sudo[95733]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:18 compute-0 sudo[95758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:18 compute-0 sudo[95758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:18 compute-0 sudo[95758]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:18 compute-0 sudo[95783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:27:18 compute-0 sudo[95783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:19 compute-0 ceph-mon[74318]: osdmap e69: 3 total, 3 up, 3 in
Jan 21 23:27:19 compute-0 ceph-mon[74318]: pgmap v172: 305 pgs: 8 peering, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 321 B/s, 10 objects/s recovering
Jan 21 23:27:19 compute-0 podman[95849]: 2026-01-21 23:27:19.30471818 +0000 UTC m=+0.072194739 container create 975b451c68bb67368d0d39606993d3a6f074471032e9cb933736c4272689c47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 21 23:27:19 compute-0 systemd[1]: Started libpod-conmon-975b451c68bb67368d0d39606993d3a6f074471032e9cb933736c4272689c47c.scope.
Jan 21 23:27:19 compute-0 podman[95849]: 2026-01-21 23:27:19.276799224 +0000 UTC m=+0.044275843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:27:19 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:19 compute-0 podman[95849]: 2026-01-21 23:27:19.394276031 +0000 UTC m=+0.161752640 container init 975b451c68bb67368d0d39606993d3a6f074471032e9cb933736c4272689c47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:27:19 compute-0 podman[95849]: 2026-01-21 23:27:19.403674264 +0000 UTC m=+0.171150833 container start 975b451c68bb67368d0d39606993d3a6f074471032e9cb933736c4272689c47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 23:27:19 compute-0 podman[95849]: 2026-01-21 23:27:19.407791812 +0000 UTC m=+0.175268381 container attach 975b451c68bb67368d0d39606993d3a6f074471032e9cb933736c4272689c47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 23:27:19 compute-0 brave_bell[95865]: 167 167
Jan 21 23:27:19 compute-0 systemd[1]: libpod-975b451c68bb67368d0d39606993d3a6f074471032e9cb933736c4272689c47c.scope: Deactivated successfully.
Jan 21 23:27:19 compute-0 podman[95849]: 2026-01-21 23:27:19.411653033 +0000 UTC m=+0.179129592 container died 975b451c68bb67368d0d39606993d3a6f074471032e9cb933736c4272689c47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 23:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3893292e502b1691adfd2d2178f95d83d1ce1b360f8517a3b3dd2f9b48e9742f-merged.mount: Deactivated successfully.
Jan 21 23:27:19 compute-0 podman[95849]: 2026-01-21 23:27:19.46727946 +0000 UTC m=+0.234756019 container remove 975b451c68bb67368d0d39606993d3a6f074471032e9cb933736c4272689c47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:27:19 compute-0 systemd[1]: libpod-conmon-975b451c68bb67368d0d39606993d3a6f074471032e9cb933736c4272689c47c.scope: Deactivated successfully.
Jan 21 23:27:19 compute-0 podman[95889]: 2026-01-21 23:27:19.702139291 +0000 UTC m=+0.076724407 container create d816ba26415b6d539481323a47c0a36dae2fa24570daca81a1f533fbe4d3caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rubin, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:27:19 compute-0 systemd[1]: Started libpod-conmon-d816ba26415b6d539481323a47c0a36dae2fa24570daca81a1f533fbe4d3caa2.scope.
Jan 21 23:27:19 compute-0 podman[95889]: 2026-01-21 23:27:19.664757018 +0000 UTC m=+0.039342114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:27:19 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b44fbdcaaa6629c00a0f989b68b33ec960ade5b2f62c2d79e1abf0a3fe44e8d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b44fbdcaaa6629c00a0f989b68b33ec960ade5b2f62c2d79e1abf0a3fe44e8d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b44fbdcaaa6629c00a0f989b68b33ec960ade5b2f62c2d79e1abf0a3fe44e8d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b44fbdcaaa6629c00a0f989b68b33ec960ade5b2f62c2d79e1abf0a3fe44e8d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:19 compute-0 podman[95889]: 2026-01-21 23:27:19.79279552 +0000 UTC m=+0.167380646 container init d816ba26415b6d539481323a47c0a36dae2fa24570daca81a1f533fbe4d3caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 21 23:27:19 compute-0 podman[95889]: 2026-01-21 23:27:19.80352289 +0000 UTC m=+0.178108006 container start d816ba26415b6d539481323a47c0a36dae2fa24570daca81a1f533fbe4d3caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rubin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 21 23:27:19 compute-0 podman[95889]: 2026-01-21 23:27:19.808366316 +0000 UTC m=+0.182951492 container attach d816ba26415b6d539481323a47c0a36dae2fa24570daca81a1f533fbe4d3caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rubin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:27:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:27:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:19.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:27:20 compute-0 ceph-mon[74318]: 2.1d deep-scrub starts
Jan 21 23:27:20 compute-0 ceph-mon[74318]: 2.1d deep-scrub ok
Jan 21 23:27:20 compute-0 ceph-mgr[74614]: [progress INFO root] Writing back 20 completed events
Jan 21 23:27:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 21 23:27:20 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 8 peering, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 265 B/s, 8 objects/s recovering
Jan 21 23:27:20 compute-0 happy_rubin[95906]: {
Jan 21 23:27:20 compute-0 happy_rubin[95906]:     "1": [
Jan 21 23:27:20 compute-0 happy_rubin[95906]:         {
Jan 21 23:27:20 compute-0 happy_rubin[95906]:             "devices": [
Jan 21 23:27:20 compute-0 happy_rubin[95906]:                 "/dev/loop3"
Jan 21 23:27:20 compute-0 happy_rubin[95906]:             ],
Jan 21 23:27:20 compute-0 happy_rubin[95906]:             "lv_name": "ceph_lv0",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:             "lv_size": "7511998464",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:             "name": "ceph_lv0",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:             "tags": {
Jan 21 23:27:20 compute-0 happy_rubin[95906]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:                 "ceph.cluster_name": "ceph",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:                 "ceph.crush_device_class": "",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:                 "ceph.encrypted": "0",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:                 "ceph.osd_id": "1",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:                 "ceph.type": "block",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:                 "ceph.vdo": "0"
Jan 21 23:27:20 compute-0 happy_rubin[95906]:             },
Jan 21 23:27:20 compute-0 happy_rubin[95906]:             "type": "block",
Jan 21 23:27:20 compute-0 happy_rubin[95906]:             "vg_name": "ceph_vg0"
Jan 21 23:27:20 compute-0 happy_rubin[95906]:         }
Jan 21 23:27:20 compute-0 happy_rubin[95906]:     ]
Jan 21 23:27:20 compute-0 happy_rubin[95906]: }
Jan 21 23:27:20 compute-0 systemd[1]: libpod-d816ba26415b6d539481323a47c0a36dae2fa24570daca81a1f533fbe4d3caa2.scope: Deactivated successfully.
Jan 21 23:27:20 compute-0 podman[95889]: 2026-01-21 23:27:20.562438878 +0000 UTC m=+0.937023994 container died d816ba26415b6d539481323a47c0a36dae2fa24570daca81a1f533fbe4d3caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rubin, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 23:27:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b44fbdcaaa6629c00a0f989b68b33ec960ade5b2f62c2d79e1abf0a3fe44e8d3-merged.mount: Deactivated successfully.
Jan 21 23:27:20 compute-0 podman[95889]: 2026-01-21 23:27:20.632852269 +0000 UTC m=+1.007437355 container remove d816ba26415b6d539481323a47c0a36dae2fa24570daca81a1f533fbe4d3caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_rubin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 21 23:27:20 compute-0 systemd[1]: libpod-conmon-d816ba26415b6d539481323a47c0a36dae2fa24570daca81a1f533fbe4d3caa2.scope: Deactivated successfully.
Jan 21 23:27:20 compute-0 sudo[95783]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:20.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:20 compute-0 sudo[95927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:20 compute-0 sudo[95927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:20 compute-0 sudo[95927]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:20 compute-0 sudo[95952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:27:20 compute-0 sudo[95952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:20 compute-0 sudo[95952]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:20 compute-0 sudo[95977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:20 compute-0 sudo[95977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:20 compute-0 sudo[95977]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:20 compute-0 sudo[96002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:27:20 compute-0 sudo[96002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:21 compute-0 podman[96069]: 2026-01-21 23:27:21.334653611 +0000 UTC m=+0.064478618 container create e6bb8fadc4e2a6002c19bcd99c4c8870a598d143b90a52dbae60943c2314f37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 23:27:21 compute-0 systemd[1]: Started libpod-conmon-e6bb8fadc4e2a6002c19bcd99c4c8870a598d143b90a52dbae60943c2314f37b.scope.
Jan 21 23:27:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:21 compute-0 ceph-mon[74318]: pgmap v173: 305 pgs: 8 peering, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 265 B/s, 8 objects/s recovering
Jan 21 23:27:21 compute-0 podman[96069]: 2026-01-21 23:27:21.302515465 +0000 UTC m=+0.032340542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:27:21 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:21 compute-0 podman[96069]: 2026-01-21 23:27:21.418702978 +0000 UTC m=+0.148527995 container init e6bb8fadc4e2a6002c19bcd99c4c8870a598d143b90a52dbae60943c2314f37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:27:21 compute-0 podman[96069]: 2026-01-21 23:27:21.433024621 +0000 UTC m=+0.162849618 container start e6bb8fadc4e2a6002c19bcd99c4c8870a598d143b90a52dbae60943c2314f37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 21 23:27:21 compute-0 podman[96069]: 2026-01-21 23:27:21.437802955 +0000 UTC m=+0.167627952 container attach e6bb8fadc4e2a6002c19bcd99c4c8870a598d143b90a52dbae60943c2314f37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 23:27:21 compute-0 boring_austin[96085]: 167 167
Jan 21 23:27:21 compute-0 systemd[1]: libpod-e6bb8fadc4e2a6002c19bcd99c4c8870a598d143b90a52dbae60943c2314f37b.scope: Deactivated successfully.
Jan 21 23:27:21 compute-0 podman[96069]: 2026-01-21 23:27:21.443119324 +0000 UTC m=+0.172944421 container died e6bb8fadc4e2a6002c19bcd99c4c8870a598d143b90a52dbae60943c2314f37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 23:27:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d625e7be36504bc53fad96ae46f496767767c779a803880f8431a90f33e9f22-merged.mount: Deactivated successfully.
Jan 21 23:27:21 compute-0 podman[96069]: 2026-01-21 23:27:21.494599233 +0000 UTC m=+0.224424230 container remove e6bb8fadc4e2a6002c19bcd99c4c8870a598d143b90a52dbae60943c2314f37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 21 23:27:21 compute-0 systemd[1]: libpod-conmon-e6bb8fadc4e2a6002c19bcd99c4c8870a598d143b90a52dbae60943c2314f37b.scope: Deactivated successfully.
Jan 21 23:27:21 compute-0 podman[96108]: 2026-01-21 23:27:21.663012445 +0000 UTC m=+0.055309980 container create 853ca7dcbe9d37647c96c584bd22f4808a30683bc7978659e040e020db8a54c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:27:21 compute-0 systemd[1]: Started libpod-conmon-853ca7dcbe9d37647c96c584bd22f4808a30683bc7978659e040e020db8a54c6.scope.
Jan 21 23:27:21 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:21 compute-0 podman[96108]: 2026-01-21 23:27:21.636548517 +0000 UTC m=+0.028846122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:27:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27583c7de7eeab598f8c17a640d0376ae2cf94a27d53ea2577a17945ada0c6f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27583c7de7eeab598f8c17a640d0376ae2cf94a27d53ea2577a17945ada0c6f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27583c7de7eeab598f8c17a640d0376ae2cf94a27d53ea2577a17945ada0c6f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27583c7de7eeab598f8c17a640d0376ae2cf94a27d53ea2577a17945ada0c6f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:21 compute-0 podman[96108]: 2026-01-21 23:27:21.744621309 +0000 UTC m=+0.136918864 container init 853ca7dcbe9d37647c96c584bd22f4808a30683bc7978659e040e020db8a54c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:27:21 compute-0 podman[96108]: 2026-01-21 23:27:21.75503546 +0000 UTC m=+0.147332985 container start 853ca7dcbe9d37647c96c584bd22f4808a30683bc7978659e040e020db8a54c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_aryabhata, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 21 23:27:21 compute-0 podman[96108]: 2026-01-21 23:27:21.759246869 +0000 UTC m=+0.151544434 container attach 853ca7dcbe9d37647c96c584bd22f4808a30683bc7978659e040e020db8a54c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_aryabhata, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 21 23:27:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:21.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:27:22 compute-0 ceph-mon[74318]: 3.11 scrub starts
Jan 21 23:27:22 compute-0 ceph-mon[74318]: 3.11 scrub ok
Jan 21 23:27:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 308 B/s, 10 objects/s recovering
Jan 21 23:27:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Jan 21 23:27:22 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 21 23:27:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Jan 21 23:27:22 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 21 23:27:22 compute-0 optimistic_aryabhata[96124]: {
Jan 21 23:27:22 compute-0 optimistic_aryabhata[96124]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:27:22 compute-0 optimistic_aryabhata[96124]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:27:22 compute-0 optimistic_aryabhata[96124]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:27:22 compute-0 optimistic_aryabhata[96124]:         "osd_id": 1,
Jan 21 23:27:22 compute-0 optimistic_aryabhata[96124]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:27:22 compute-0 optimistic_aryabhata[96124]:         "type": "bluestore"
Jan 21 23:27:22 compute-0 optimistic_aryabhata[96124]:     }
Jan 21 23:27:22 compute-0 optimistic_aryabhata[96124]: }
Jan 21 23:27:22 compute-0 systemd[1]: libpod-853ca7dcbe9d37647c96c584bd22f4808a30683bc7978659e040e020db8a54c6.scope: Deactivated successfully.
Jan 21 23:27:22 compute-0 podman[96108]: 2026-01-21 23:27:22.657081942 +0000 UTC m=+1.049379467 container died 853ca7dcbe9d37647c96c584bd22f4808a30683bc7978659e040e020db8a54c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_aryabhata, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 21 23:27:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:22.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-27583c7de7eeab598f8c17a640d0376ae2cf94a27d53ea2577a17945ada0c6f1-merged.mount: Deactivated successfully.
Jan 21 23:27:22 compute-0 podman[96108]: 2026-01-21 23:27:22.717934517 +0000 UTC m=+1.110232042 container remove 853ca7dcbe9d37647c96c584bd22f4808a30683bc7978659e040e020db8a54c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_aryabhata, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:27:22 compute-0 systemd[1]: libpod-conmon-853ca7dcbe9d37647c96c584bd22f4808a30683bc7978659e040e020db8a54c6.scope: Deactivated successfully.
Jan 21 23:27:22 compute-0 sudo[96002]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:27:22 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:27:22 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:22 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 003d3ff7-28a9-435e-97a1-c7e5d34f5825 does not exist
Jan 21 23:27:22 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 39c41a02-9f24-472f-ac27-cf88d87ba0a6 does not exist
Jan 21 23:27:22 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev a51c4103-9c54-4e43-a3bf-887c07634e08 does not exist
Jan 21 23:27:22 compute-0 sudo[96156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:22 compute-0 sudo[96156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:22 compute-0 sudo[96156]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:22 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 21 23:27:22 compute-0 sudo[96181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:27:22 compute-0 sudo[96181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:22 compute-0 sudo[96181]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:22 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 21 23:27:23 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Jan 21 23:27:23 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Jan 21 23:27:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 21 23:27:23 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 23:27:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 21 23:27:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 21 23:27:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:27:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:23 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 21 23:27:23 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 21 23:27:23 compute-0 sudo[96207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:23 compute-0 sudo[96207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:23 compute-0 sudo[96207]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:23 compute-0 sudo[96232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:27:23 compute-0 sudo[96232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:23 compute-0 sudo[96232]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:23 compute-0 sudo[96257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:23 compute-0 sudo[96257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:23 compute-0 sudo[96257]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 21 23:27:23 compute-0 ceph-mon[74318]: pgmap v174: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 308 B/s, 10 objects/s recovering
Jan 21 23:27:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 21 23:27:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 21 23:27:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:23 compute-0 ceph-mon[74318]: 3.10 scrub starts
Jan 21 23:27:23 compute-0 ceph-mon[74318]: 3.10 scrub ok
Jan 21 23:27:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 23:27:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 21 23:27:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:23 compute-0 sudo[96282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:27:23 compute-0 sudo[96282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:23 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 21 23:27:23 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 21 23:27:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 21 23:27:23 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 21 23:27:23 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 70 pg[6.e( v 49'39 (0'0,49'39] local-lis/les=60/61 n=1 ec=52/24 lis/c=60/60 les/c/f=61/61/0 sis=70 pruub=12.662728310s) [0] r=-1 lpr=70 pi=[60,70)/1 crt=49'39 mlcod 49'39 active pruub 147.172683716s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:23 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 70 pg[6.e( v 49'39 (0'0,49'39] local-lis/les=60/61 n=1 ec=52/24 lis/c=60/60 les/c/f=61/61/0 sis=70 pruub=12.662653923s) [0] r=-1 lpr=70 pi=[60,70)/1 crt=49'39 mlcod 0'0 unknown NOTIFY pruub 147.172683716s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:23 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 70 pg[6.6( v 49'39 (0'0,49'39] local-lis/les=60/61 n=2 ec=52/24 lis/c=60/60 les/c/f=61/61/0 sis=70 pruub=12.653261185s) [0] r=-1 lpr=70 pi=[60,70)/1 crt=49'39 mlcod 49'39 active pruub 147.164184570s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:23 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 70 pg[6.6( v 49'39 (0'0,49'39] local-lis/les=60/61 n=2 ec=52/24 lis/c=60/60 les/c/f=61/61/0 sis=70 pruub=12.653130531s) [0] r=-1 lpr=70 pi=[60,70)/1 crt=49'39 mlcod 0'0 unknown NOTIFY pruub 147.164184570s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:23 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 70 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=70) [1] r=0 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:23 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 70 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=70) [1] r=0 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:23 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 70 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=70) [1] r=0 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:23 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 70 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=70) [1] r=0 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:23 compute-0 podman[96323]: 2026-01-21 23:27:23.832373555 +0000 UTC m=+0.057684512 container create b5027aaa8284c1a3318239cef7c4f4a0f23066a2c7ee266969c36005fb9ba578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:27:23 compute-0 systemd[1]: Started libpod-conmon-b5027aaa8284c1a3318239cef7c4f4a0f23066a2c7ee266969c36005fb9ba578.scope.
Jan 21 23:27:23 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 21 23:27:23 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 21 23:27:23 compute-0 podman[96323]: 2026-01-21 23:27:23.804833388 +0000 UTC m=+0.030144385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:27:23 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:27:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:23.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:27:23 compute-0 podman[96323]: 2026-01-21 23:27:23.925284202 +0000 UTC m=+0.150595199 container init b5027aaa8284c1a3318239cef7c4f4a0f23066a2c7ee266969c36005fb9ba578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_banach, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:27:23 compute-0 podman[96323]: 2026-01-21 23:27:23.937294915 +0000 UTC m=+0.162605862 container start b5027aaa8284c1a3318239cef7c4f4a0f23066a2c7ee266969c36005fb9ba578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_banach, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 23:27:23 compute-0 recursing_banach[96340]: 167 167
Jan 21 23:27:23 compute-0 systemd[1]: libpod-b5027aaa8284c1a3318239cef7c4f4a0f23066a2c7ee266969c36005fb9ba578.scope: Deactivated successfully.
Jan 21 23:27:23 compute-0 conmon[96340]: conmon b5027aaa8284c1a33182 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b5027aaa8284c1a3318239cef7c4f4a0f23066a2c7ee266969c36005fb9ba578.scope/container/memory.events
Jan 21 23:27:23 compute-0 podman[96323]: 2026-01-21 23:27:23.95438821 +0000 UTC m=+0.179699147 container attach b5027aaa8284c1a3318239cef7c4f4a0f23066a2c7ee266969c36005fb9ba578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_banach, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:27:23 compute-0 podman[96323]: 2026-01-21 23:27:23.955619842 +0000 UTC m=+0.180930869 container died b5027aaa8284c1a3318239cef7c4f4a0f23066a2c7ee266969c36005fb9ba578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Jan 21 23:27:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d6a859ec66508bf84027dd2d831b94f7da420f2fe310c8c54c4fe37d62e83f8-merged.mount: Deactivated successfully.
Jan 21 23:27:24 compute-0 podman[96323]: 2026-01-21 23:27:24.140261767 +0000 UTC m=+0.365572714 container remove b5027aaa8284c1a3318239cef7c4f4a0f23066a2c7ee266969c36005fb9ba578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_banach, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:27:24 compute-0 systemd[1]: libpod-conmon-b5027aaa8284c1a3318239cef7c4f4a0f23066a2c7ee266969c36005fb9ba578.scope: Deactivated successfully.
Jan 21 23:27:24 compute-0 sudo[96282]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:27:24 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:27:24 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:24 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.boqcsl (monmap changed)...
Jan 21 23:27:24 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.boqcsl (monmap changed)...
Jan 21 23:27:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.boqcsl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 21 23:27:24 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.boqcsl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 23:27:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 21 23:27:24 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 21 23:27:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:27:24 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:24 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.boqcsl on compute-0
Jan 21 23:27:24 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.boqcsl on compute-0
Jan 21 23:27:24 compute-0 sudo[96360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:24 compute-0 sudo[96360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:24 compute-0 sudo[96360]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:24 compute-0 sudo[96385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:27:24 compute-0 sudo[96385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:24 compute-0 sudo[96385]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 4 unknown, 301 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 103 B/s, 3 objects/s recovering
Jan 21 23:27:24 compute-0 sudo[96410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:24 compute-0 sudo[96410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:24 compute-0 sudo[96410]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 21 23:27:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 21 23:27:24 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 21 23:27:24 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 71 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[54,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:24 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 71 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[54,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:24 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 71 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[54,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:24 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 71 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[54,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:24 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 71 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[54,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:24 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 71 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[54,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:24 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 71 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[54,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:24 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 71 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[54,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:24 compute-0 ceph-mon[74318]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 21 23:27:24 compute-0 ceph-mon[74318]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 21 23:27:24 compute-0 ceph-mon[74318]: 2.1f scrub starts
Jan 21 23:27:24 compute-0 ceph-mon[74318]: 2.1f scrub ok
Jan 21 23:27:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 21 23:27:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 21 23:27:24 compute-0 ceph-mon[74318]: osdmap e70: 3 total, 3 up, 3 in
Jan 21 23:27:24 compute-0 ceph-mon[74318]: 3.16 scrub starts
Jan 21 23:27:24 compute-0 ceph-mon[74318]: 3.16 scrub ok
Jan 21 23:27:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.boqcsl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 23:27:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 21 23:27:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:24 compute-0 sudo[96435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:27:24 compute-0 sudo[96435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:24.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:24 compute-0 podman[96475]: 2026-01-21 23:27:24.916728081 +0000 UTC m=+0.058177764 container create 37e51cfd3745d5b7b89d05edd92af7dd1492dfbdb3265f0a4d596f0a90ef4563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hertz, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 23:27:24 compute-0 systemd[1]: Started libpod-conmon-37e51cfd3745d5b7b89d05edd92af7dd1492dfbdb3265f0a4d596f0a90ef4563.scope.
Jan 21 23:27:24 compute-0 podman[96475]: 2026-01-21 23:27:24.895643103 +0000 UTC m=+0.037092776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:27:24 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:25 compute-0 podman[96475]: 2026-01-21 23:27:25.009801494 +0000 UTC m=+0.151251227 container init 37e51cfd3745d5b7b89d05edd92af7dd1492dfbdb3265f0a4d596f0a90ef4563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hertz, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 21 23:27:25 compute-0 podman[96475]: 2026-01-21 23:27:25.015907243 +0000 UTC m=+0.157356936 container start 37e51cfd3745d5b7b89d05edd92af7dd1492dfbdb3265f0a4d596f0a90ef4563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 21 23:27:25 compute-0 suspicious_hertz[96491]: 167 167
Jan 21 23:27:25 compute-0 podman[96475]: 2026-01-21 23:27:25.020129252 +0000 UTC m=+0.161578935 container attach 37e51cfd3745d5b7b89d05edd92af7dd1492dfbdb3265f0a4d596f0a90ef4563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hertz, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 21 23:27:25 compute-0 systemd[1]: libpod-37e51cfd3745d5b7b89d05edd92af7dd1492dfbdb3265f0a4d596f0a90ef4563.scope: Deactivated successfully.
Jan 21 23:27:25 compute-0 podman[96475]: 2026-01-21 23:27:25.020967424 +0000 UTC m=+0.162417107 container died 37e51cfd3745d5b7b89d05edd92af7dd1492dfbdb3265f0a4d596f0a90ef4563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 23:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3433692385475ca0189191456de76912a60b6ce8dc09ce7abd7187e39835b00-merged.mount: Deactivated successfully.
Jan 21 23:27:25 compute-0 podman[96475]: 2026-01-21 23:27:25.076183501 +0000 UTC m=+0.217633164 container remove 37e51cfd3745d5b7b89d05edd92af7dd1492dfbdb3265f0a4d596f0a90ef4563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:27:25 compute-0 systemd[1]: libpod-conmon-37e51cfd3745d5b7b89d05edd92af7dd1492dfbdb3265f0a4d596f0a90ef4563.scope: Deactivated successfully.
Jan 21 23:27:25 compute-0 sudo[96435]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:27:25 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:27:25 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:25 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Jan 21 23:27:25 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Jan 21 23:27:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 21 23:27:25 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 23:27:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:27:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:25 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Jan 21 23:27:25 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Jan 21 23:27:25 compute-0 sudo[96513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:25 compute-0 sudo[96513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:25 compute-0 sudo[96513]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:25 compute-0 sudo[96538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:27:25 compute-0 sudo[96538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:25 compute-0 sudo[96538]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:25 compute-0 sudo[96563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:25 compute-0 sudo[96563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:25 compute-0 sudo[96563]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:25 compute-0 sudo[96588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:27:25 compute-0 sudo[96588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 21 23:27:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 21 23:27:25 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 21 23:27:25 compute-0 ceph-mon[74318]: Reconfiguring mgr.compute-0.boqcsl (monmap changed)...
Jan 21 23:27:25 compute-0 ceph-mon[74318]: Reconfiguring daemon mgr.compute-0.boqcsl on compute-0
Jan 21 23:27:25 compute-0 ceph-mon[74318]: pgmap v176: 305 pgs: 4 unknown, 301 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 103 B/s, 3 objects/s recovering
Jan 21 23:27:25 compute-0 ceph-mon[74318]: osdmap e71: 3 total, 3 up, 3 in
Jan 21 23:27:25 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:25 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:25 compute-0 ceph-mon[74318]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 21 23:27:25 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 23:27:25 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:25 compute-0 ceph-mon[74318]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 21 23:27:25 compute-0 ceph-mon[74318]: osdmap e72: 3 total, 3 up, 3 in
Jan 21 23:27:25 compute-0 podman[96630]: 2026-01-21 23:27:25.746142284 +0000 UTC m=+0.050003462 container create 45b64ecd193cac46fe7cd88f9bfeb4636dffaa6a96ce27b69272bd45065f6799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:27:25 compute-0 systemd[1]: Started libpod-conmon-45b64ecd193cac46fe7cd88f9bfeb4636dffaa6a96ce27b69272bd45065f6799.scope.
Jan 21 23:27:25 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:25 compute-0 podman[96630]: 2026-01-21 23:27:25.724947802 +0000 UTC m=+0.028809000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:27:25 compute-0 podman[96630]: 2026-01-21 23:27:25.834822751 +0000 UTC m=+0.138683949 container init 45b64ecd193cac46fe7cd88f9bfeb4636dffaa6a96ce27b69272bd45065f6799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williamson, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 21 23:27:25 compute-0 podman[96630]: 2026-01-21 23:27:25.844879344 +0000 UTC m=+0.148740512 container start 45b64ecd193cac46fe7cd88f9bfeb4636dffaa6a96ce27b69272bd45065f6799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williamson, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 23:27:25 compute-0 podman[96630]: 2026-01-21 23:27:25.849625617 +0000 UTC m=+0.153486835 container attach 45b64ecd193cac46fe7cd88f9bfeb4636dffaa6a96ce27b69272bd45065f6799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Jan 21 23:27:25 compute-0 stupefied_williamson[96646]: 167 167
Jan 21 23:27:25 compute-0 systemd[1]: libpod-45b64ecd193cac46fe7cd88f9bfeb4636dffaa6a96ce27b69272bd45065f6799.scope: Deactivated successfully.
Jan 21 23:27:25 compute-0 podman[96630]: 2026-01-21 23:27:25.851355951 +0000 UTC m=+0.155217159 container died 45b64ecd193cac46fe7cd88f9bfeb4636dffaa6a96ce27b69272bd45065f6799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 21 23:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-d530faf0f820683d0629851208a4cf124c84251e109f6f4762891fc009f38bba-merged.mount: Deactivated successfully.
Jan 21 23:27:25 compute-0 podman[96630]: 2026-01-21 23:27:25.905732557 +0000 UTC m=+0.209593755 container remove 45b64ecd193cac46fe7cd88f9bfeb4636dffaa6a96ce27b69272bd45065f6799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:27:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:25.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:25 compute-0 systemd[1]: libpod-conmon-45b64ecd193cac46fe7cd88f9bfeb4636dffaa6a96ce27b69272bd45065f6799.scope: Deactivated successfully.
Jan 21 23:27:26 compute-0 sudo[96588]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:26 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:27:26 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:26 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:27:26 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:26 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Jan 21 23:27:26 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Jan 21 23:27:26 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Jan 21 23:27:26 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 21 23:27:26 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:27:26 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:26 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Jan 21 23:27:26 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Jan 21 23:27:26 compute-0 sudo[96665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:26 compute-0 sudo[96665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:26 compute-0 sudo[96665]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:26 compute-0 sudo[96690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:27:26 compute-0 sudo[96690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:26 compute-0 sudo[96690]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:26 compute-0 sudo[96715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:26 compute-0 sudo[96715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:26 compute-0 sudo[96715]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 4 unknown, 301 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 98 B/s, 4 objects/s recovering
Jan 21 23:27:26 compute-0 sudo[96740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3759241a-7f1c-520d-ba17-879943ee2f00
Jan 21 23:27:26 compute-0 sudo[96740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:26 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 21 23:27:26 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 21 23:27:26 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 21 23:27:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 73 pg[9.16( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=5 ec=54/42 lis/c=71/54 les/c/f=72/55/0 sis=73) [1] r=0 lpr=73 pi=[54,73)/1 luod=0'0 crt=49'1136 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 73 pg[9.16( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=5 ec=54/42 lis/c=71/54 les/c/f=72/55/0 sis=73) [1] r=0 lpr=73 pi=[54,73)/1 crt=49'1136 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 73 pg[9.e( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=6 ec=54/42 lis/c=71/54 les/c/f=72/55/0 sis=73) [1] r=0 lpr=73 pi=[54,73)/1 luod=0'0 crt=49'1136 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 73 pg[9.e( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=6 ec=54/42 lis/c=71/54 les/c/f=72/55/0 sis=73) [1] r=0 lpr=73 pi=[54,73)/1 crt=49'1136 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 73 pg[9.6( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=6 ec=54/42 lis/c=71/54 les/c/f=72/55/0 sis=73) [1] r=0 lpr=73 pi=[54,73)/1 luod=0'0 crt=49'1136 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 73 pg[9.6( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=6 ec=54/42 lis/c=71/54 les/c/f=72/55/0 sis=73) [1] r=0 lpr=73 pi=[54,73)/1 crt=49'1136 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 73 pg[9.1e( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=5 ec=54/42 lis/c=71/54 les/c/f=72/55/0 sis=73) [1] r=0 lpr=73 pi=[54,73)/1 luod=0'0 crt=49'1136 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:26 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 73 pg[9.1e( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=5 ec=54/42 lis/c=71/54 les/c/f=72/55/0 sis=73) [1] r=0 lpr=73 pi=[54,73)/1 crt=49'1136 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:26.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:26 compute-0 podman[96781]: 2026-01-21 23:27:26.836741542 +0000 UTC m=+0.065398212 container create f5f43be6db13ce856fff3e9372778656886a1ed59673c09598e70215438bc521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bouman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 21 23:27:26 compute-0 systemd[1]: Started libpod-conmon-f5f43be6db13ce856fff3e9372778656886a1ed59673c09598e70215438bc521.scope.
Jan 21 23:27:26 compute-0 podman[96781]: 2026-01-21 23:27:26.807362808 +0000 UTC m=+0.036019528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:27:26 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:26 compute-0 podman[96781]: 2026-01-21 23:27:26.933372597 +0000 UTC m=+0.162029267 container init f5f43be6db13ce856fff3e9372778656886a1ed59673c09598e70215438bc521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bouman, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 23:27:26 compute-0 podman[96781]: 2026-01-21 23:27:26.939899076 +0000 UTC m=+0.168555716 container start f5f43be6db13ce856fff3e9372778656886a1ed59673c09598e70215438bc521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bouman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:27:26 compute-0 podman[96781]: 2026-01-21 23:27:26.943420928 +0000 UTC m=+0.172077588 container attach f5f43be6db13ce856fff3e9372778656886a1ed59673c09598e70215438bc521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:27:26 compute-0 crazy_bouman[96797]: 167 167
Jan 21 23:27:26 compute-0 systemd[1]: libpod-f5f43be6db13ce856fff3e9372778656886a1ed59673c09598e70215438bc521.scope: Deactivated successfully.
Jan 21 23:27:26 compute-0 podman[96781]: 2026-01-21 23:27:26.946729795 +0000 UTC m=+0.175386465 container died f5f43be6db13ce856fff3e9372778656886a1ed59673c09598e70215438bc521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bouman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:27:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e07ff9aaa5088d59d6eff6e79e486150f03a8d32b533791cd8b728ffe9edab6-merged.mount: Deactivated successfully.
Jan 21 23:27:27 compute-0 podman[96781]: 2026-01-21 23:27:27.000268008 +0000 UTC m=+0.228924658 container remove f5f43be6db13ce856fff3e9372778656886a1ed59673c09598e70215438bc521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:27:27 compute-0 systemd[1]: libpod-conmon-f5f43be6db13ce856fff3e9372778656886a1ed59673c09598e70215438bc521.scope: Deactivated successfully.
Jan 21 23:27:27 compute-0 sudo[96740]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:27:27 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:27:27 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:27 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Jan 21 23:27:27 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Jan 21 23:27:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 21 23:27:27 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 23:27:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:27:27 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:27 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Jan 21 23:27:27 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Jan 21 23:27:27 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:27 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:27 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 21 23:27:27 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:27 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 23:27:27 compute-0 ceph-mon[74318]: 2.13 scrub starts
Jan 21 23:27:27 compute-0 ceph-mon[74318]: osdmap e73: 3 total, 3 up, 3 in
Jan 21 23:27:27 compute-0 ceph-mon[74318]: 2.13 scrub ok
Jan 21 23:27:27 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:27 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:27 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 21 23:27:27 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:27:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 21 23:27:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 21 23:27:27 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 21 23:27:27 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 74 pg[9.6( v 49'1136 (0'0,49'1136] local-lis/les=73/74 n=6 ec=54/42 lis/c=71/54 les/c/f=72/55/0 sis=73) [1] r=0 lpr=73 pi=[54,73)/1 crt=49'1136 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:27 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 74 pg[9.e( v 49'1136 (0'0,49'1136] local-lis/les=73/74 n=6 ec=54/42 lis/c=71/54 les/c/f=72/55/0 sis=73) [1] r=0 lpr=73 pi=[54,73)/1 crt=49'1136 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:27 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 74 pg[9.1e( v 49'1136 (0'0,49'1136] local-lis/les=73/74 n=5 ec=54/42 lis/c=71/54 les/c/f=72/55/0 sis=73) [1] r=0 lpr=73 pi=[54,73)/1 crt=49'1136 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:27 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 74 pg[9.16( v 49'1136 (0'0,49'1136] local-lis/les=73/74 n=5 ec=54/42 lis/c=71/54 les/c/f=72/55/0 sis=73) [1] r=0 lpr=73 pi=[54,73)/1 crt=49'1136 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:27:27 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:27:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:27:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:27.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:27:27 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:27 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Jan 21 23:27:27 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Jan 21 23:27:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Jan 21 23:27:27 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 21 23:27:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:27:27 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:27 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Jan 21 23:27:27 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Jan 21 23:27:28 compute-0 ceph-mon[74318]: Reconfiguring osd.1 (monmap changed)...
Jan 21 23:27:28 compute-0 ceph-mon[74318]: Reconfiguring daemon osd.1 on compute-0
Jan 21 23:27:28 compute-0 ceph-mon[74318]: pgmap v179: 305 pgs: 4 unknown, 301 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 98 B/s, 4 objects/s recovering
Jan 21 23:27:28 compute-0 ceph-mon[74318]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 21 23:27:28 compute-0 ceph-mon[74318]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 21 23:27:28 compute-0 ceph-mon[74318]: 2.9 scrub starts
Jan 21 23:27:28 compute-0 ceph-mon[74318]: 2.9 scrub ok
Jan 21 23:27:28 compute-0 ceph-mon[74318]: 2.15 scrub starts
Jan 21 23:27:28 compute-0 ceph-mon[74318]: osdmap e74: 3 total, 3 up, 3 in
Jan 21 23:27:28 compute-0 ceph-mon[74318]: 2.15 scrub ok
Jan 21 23:27:28 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:28 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:28 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 21 23:27:28 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:28.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v182: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 826 B/s wr, 72 op/s; 44 B/s, 4 objects/s recovering
Jan 21 23:27:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:27:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Jan 21 23:27:29 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 21 23:27:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Jan 21 23:27:29 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 21 23:27:29 compute-0 ceph-mon[74318]: Reconfiguring osd.0 (monmap changed)...
Jan 21 23:27:29 compute-0 ceph-mon[74318]: Reconfiguring daemon osd.0 on compute-1
Jan 21 23:27:29 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:27:29 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:29 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Jan 21 23:27:29 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Jan 21 23:27:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 21 23:27:29 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 23:27:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 21 23:27:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 21 23:27:29 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 21 23:27:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:27:29 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:29 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Jan 21 23:27:29 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Jan 21 23:27:29 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 21 23:27:29 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 21 23:27:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 21 23:27:29 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 21 23:27:29 compute-0 sudo[96850]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsqbafjnucqgtitpkdgpazdtpcqevqwr ; /usr/bin/python3'
Jan 21 23:27:29 compute-0 sudo[96850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:27:29 compute-0 python3[96852]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:27:29 compute-0 podman[96853]: 2026-01-21 23:27:29.870761111 +0000 UTC m=+0.062872758 container create 474c310cd9bcaa4f36b46f6894415d1c0b2df9eced18885df3571efa00a3ab23 (image=quay.io/ceph/ceph:v18, name=amazing_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:27:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:29.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:29 compute-0 systemd[1]: Started libpod-conmon-474c310cd9bcaa4f36b46f6894415d1c0b2df9eced18885df3571efa00a3ab23.scope.
Jan 21 23:27:29 compute-0 podman[96853]: 2026-01-21 23:27:29.836402637 +0000 UTC m=+0.028514354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:27:29 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c21c204567500653ac340cdd64383967abeb1817b2be44d2b8c7bf6e06ce059/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c21c204567500653ac340cdd64383967abeb1817b2be44d2b8c7bf6e06ce059/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:29 compute-0 podman[96853]: 2026-01-21 23:27:29.98487677 +0000 UTC m=+0.176988397 container init 474c310cd9bcaa4f36b46f6894415d1c0b2df9eced18885df3571efa00a3ab23 (image=quay.io/ceph/ceph:v18, name=amazing_robinson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 21 23:27:29 compute-0 podman[96853]: 2026-01-21 23:27:29.996471882 +0000 UTC m=+0.188583489 container start 474c310cd9bcaa4f36b46f6894415d1c0b2df9eced18885df3571efa00a3ab23 (image=quay.io/ceph/ceph:v18, name=amazing_robinson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:27:29 compute-0 podman[96853]: 2026-01-21 23:27:29.999727227 +0000 UTC m=+0.191838834 container attach 474c310cd9bcaa4f36b46f6894415d1c0b2df9eced18885df3571efa00a3ab23 (image=quay.io/ceph/ceph:v18, name=amazing_robinson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 23:27:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:27:30 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:27:30 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:30 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Jan 21 23:27:30 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Jan 21 23:27:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 21 23:27:30 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 23:27:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 21 23:27:30 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 21 23:27:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:27:30 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:30 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Jan 21 23:27:30 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Jan 21 23:27:30 compute-0 ceph-mon[74318]: pgmap v182: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 826 B/s wr, 72 op/s; 44 B/s, 4 objects/s recovering
Jan 21 23:27:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 21 23:27:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 21 23:27:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:30 compute-0 ceph-mon[74318]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 21 23:27:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 23:27:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 21 23:27:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:30 compute-0 ceph-mon[74318]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 21 23:27:30 compute-0 ceph-mon[74318]: 2.c scrub starts
Jan 21 23:27:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 21 23:27:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 21 23:27:30 compute-0 ceph-mon[74318]: osdmap e75: 3 total, 3 up, 3 in
Jan 21 23:27:30 compute-0 ceph-mon[74318]: 2.c scrub ok
Jan 21 23:27:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 21 23:27:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 21 23:27:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:30 compute-0 amazing_robinson[96869]: could not fetch user info: no user info saved
Jan 21 23:27:30 compute-0 systemd[1]: libpod-474c310cd9bcaa4f36b46f6894415d1c0b2df9eced18885df3571efa00a3ab23.scope: Deactivated successfully.
Jan 21 23:27:30 compute-0 podman[96853]: 2026-01-21 23:27:30.558959008 +0000 UTC m=+0.751070655 container died 474c310cd9bcaa4f36b46f6894415d1c0b2df9eced18885df3571efa00a3ab23 (image=quay.io/ceph/ceph:v18, name=amazing_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c21c204567500653ac340cdd64383967abeb1817b2be44d2b8c7bf6e06ce059-merged.mount: Deactivated successfully.
Jan 21 23:27:30 compute-0 podman[96853]: 2026-01-21 23:27:30.610497219 +0000 UTC m=+0.802608826 container remove 474c310cd9bcaa4f36b46f6894415d1c0b2df9eced18885df3571efa00a3ab23 (image=quay.io/ceph/ceph:v18, name=amazing_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:27:30 compute-0 systemd[1]: libpod-conmon-474c310cd9bcaa4f36b46f6894415d1c0b2df9eced18885df3571efa00a3ab23.scope: Deactivated successfully.
Jan 21 23:27:30 compute-0 sudo[96850]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:30.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:30 compute-0 sudo[96989]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mogthmqzcvwllniqljpcssasjqknnofu ; /usr/bin/python3'
Jan 21 23:27:30 compute-0 sudo[96989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:27:30 compute-0 python3[96991]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:27:31 compute-0 podman[96992]: 2026-01-21 23:27:31.102175243 +0000 UTC m=+0.084117660 container create fde5417da6a08403b4991165d411f9709b2c0b01bb0dfa5ded164494bde7f720 (image=quay.io/ceph/ceph:v18, name=stoic_fermat, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 23:27:31 compute-0 systemd[1]: Started libpod-conmon-fde5417da6a08403b4991165d411f9709b2c0b01bb0dfa5ded164494bde7f720.scope.
Jan 21 23:27:31 compute-0 podman[96992]: 2026-01-21 23:27:31.062699436 +0000 UTC m=+0.044641943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 21 23:27:31 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4487381e0404a9c387f3ea029bb9491e03cb1cce9eba65151f82f91dcfbb3d63/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4487381e0404a9c387f3ea029bb9491e03cb1cce9eba65151f82f91dcfbb3d63/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:31 compute-0 podman[96992]: 2026-01-21 23:27:31.190381778 +0000 UTC m=+0.172324205 container init fde5417da6a08403b4991165d411f9709b2c0b01bb0dfa5ded164494bde7f720 (image=quay.io/ceph/ceph:v18, name=stoic_fermat, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 21 23:27:31 compute-0 podman[96992]: 2026-01-21 23:27:31.197259587 +0000 UTC m=+0.179202004 container start fde5417da6a08403b4991165d411f9709b2c0b01bb0dfa5ded164494bde7f720 (image=quay.io/ceph/ceph:v18, name=stoic_fermat, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 23:27:31 compute-0 podman[96992]: 2026-01-21 23:27:31.202595516 +0000 UTC m=+0.184537913 container attach fde5417da6a08403b4991165d411f9709b2c0b01bb0dfa5ded164494bde7f720 (image=quay.io/ceph/ceph:v18, name=stoic_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:27:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:27:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:27:31 compute-0 stoic_fermat[97008]: {
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     "user_id": "openstack",
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     "display_name": "openstack",
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     "email": "",
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     "suspended": 0,
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     "max_buckets": 1000,
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     "subusers": [],
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     "keys": [
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:         {
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:             "user": "openstack",
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:             "access_key": "WAVO3QWLCQIUEN83U2SM",
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:             "secret_key": "mXdm4r0v1iQauViCaauwrGQxc3vH5RZ43ur764F7"
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:         }
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     ],
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     "swift_keys": [],
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     "caps": [],
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     "op_mask": "read, write, delete",
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     "default_placement": "",
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     "default_storage_class": "",
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     "placement_tags": [],
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     "bucket_quota": {
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:         "enabled": false,
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:         "check_on_raw": false,
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:         "max_size": -1,
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:         "max_size_kb": 0,
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:         "max_objects": -1
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     },
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     "user_quota": {
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:         "enabled": false,
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:         "check_on_raw": false,
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:         "max_size": -1,
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:         "max_size_kb": 0,
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:         "max_objects": -1
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     },
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     "temp_url_keys": [],
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     "type": "rgw",
Jan 21 23:27:31 compute-0 stoic_fermat[97008]:     "mfa_ids": []
Jan 21 23:27:31 compute-0 stoic_fermat[97008]: }
Jan 21 23:27:31 compute-0 stoic_fermat[97008]: 
Jan 21 23:27:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:31 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.uvjsro (monmap changed)...
Jan 21 23:27:31 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.uvjsro (monmap changed)...
Jan 21 23:27:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.uvjsro", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 21 23:27:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.uvjsro", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 23:27:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 21 23:27:31 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 21 23:27:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:27:31 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:31 compute-0 ceph-mgr[74614]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.uvjsro on compute-2
Jan 21 23:27:31 compute-0 ceph-mgr[74614]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.uvjsro on compute-2
Jan 21 23:27:31 compute-0 systemd[1]: libpod-fde5417da6a08403b4991165d411f9709b2c0b01bb0dfa5ded164494bde7f720.scope: Deactivated successfully.
Jan 21 23:27:31 compute-0 podman[96992]: 2026-01-21 23:27:31.444623764 +0000 UTC m=+0.426566151 container died fde5417da6a08403b4991165d411f9709b2c0b01bb0dfa5ded164494bde7f720 (image=quay.io/ceph/ceph:v18, name=stoic_fermat, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:27:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v184: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 695 B/s wr, 60 op/s; 37 B/s, 3 objects/s recovering
Jan 21 23:27:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Jan 21 23:27:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 21 23:27:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Jan 21 23:27:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 21 23:27:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4487381e0404a9c387f3ea029bb9491e03cb1cce9eba65151f82f91dcfbb3d63-merged.mount: Deactivated successfully.
Jan 21 23:27:31 compute-0 podman[96992]: 2026-01-21 23:27:31.487822538 +0000 UTC m=+0.469764965 container remove fde5417da6a08403b4991165d411f9709b2c0b01bb0dfa5ded164494bde7f720 (image=quay.io/ceph/ceph:v18, name=stoic_fermat, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 23:27:31 compute-0 ceph-mon[74318]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 21 23:27:31 compute-0 ceph-mon[74318]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 21 23:27:31 compute-0 ceph-mon[74318]: 2.d scrub starts
Jan 21 23:27:31 compute-0 ceph-mon[74318]: 2.d scrub ok
Jan 21 23:27:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.uvjsro", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 21 23:27:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 21 23:27:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 21 23:27:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 21 23:27:31 compute-0 systemd[1]: libpod-conmon-fde5417da6a08403b4991165d411f9709b2c0b01bb0dfa5ded164494bde7f720.scope: Deactivated successfully.
Jan 21 23:27:31 compute-0 sudo[96989]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:31.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:31 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.1 deep-scrub starts
Jan 21 23:27:31 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.1 deep-scrub ok
Jan 21 23:27:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:27:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:27:32 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:27:32 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:32 compute-0 sudo[97104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:32 compute-0 sudo[97104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:32 compute-0 sudo[97104]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 21 23:27:32 compute-0 sudo[97129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:27:32 compute-0 sudo[97129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:32 compute-0 sudo[97129]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:32 compute-0 sudo[97154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:32 compute-0 sudo[97154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:32 compute-0 sudo[97154]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:32 compute-0 sudo[97179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 21 23:27:32 compute-0 sudo[97179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:32.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:32 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 21 23:27:32 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 21 23:27:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 21 23:27:32 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 21 23:27:32 compute-0 ceph-mon[74318]: Reconfiguring mgr.compute-2.uvjsro (monmap changed)...
Jan 21 23:27:32 compute-0 ceph-mon[74318]: Reconfiguring daemon mgr.compute-2.uvjsro on compute-2
Jan 21 23:27:32 compute-0 ceph-mon[74318]: pgmap v184: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 695 B/s wr, 60 op/s; 37 B/s, 3 objects/s recovering
Jan 21 23:27:32 compute-0 ceph-mon[74318]: 7.1 deep-scrub starts
Jan 21 23:27:32 compute-0 ceph-mon[74318]: 7.1 deep-scrub ok
Jan 21 23:27:32 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:32 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:32 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 76 pg[6.8( empty local-lis/les=0/0 n=0 ec=52/24 lis/c=52/52 les/c/f=53/53/0 sis=76) [1] r=0 lpr=76 pi=[52,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:33 compute-0 podman[97268]: 2026-01-21 23:27:33.113624203 +0000 UTC m=+0.059811726 container exec 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:27:33 compute-0 podman[97268]: 2026-01-21 23:27:33.230145296 +0000 UTC m=+0.176332829 container exec_died 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:27:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 597 B/s wr, 53 op/s; 32 B/s, 2 objects/s recovering
Jan 21 23:27:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Jan 21 23:27:33 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 21 23:27:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Jan 21 23:27:33 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 21 23:27:33 compute-0 sudo[97340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:33 compute-0 sudo[97340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:33 compute-0 sudo[97340]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:33 compute-0 sudo[97381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:33 compute-0 sudo[97381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:33 compute-0 sudo[97381]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 21 23:27:33 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 21 23:27:33 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 21 23:27:33 compute-0 ceph-mon[74318]: osdmap e76: 3 total, 3 up, 3 in
Jan 21 23:27:33 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 21 23:27:33 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 21 23:27:33 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 21 23:27:33 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 21 23:27:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 21 23:27:33 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 21 23:27:33 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 77 pg[6.8( v 49'39 (0'0,49'39] local-lis/les=76/77 n=1 ec=52/24 lis/c=52/52 les/c/f=53/53/0 sis=76) [1] r=0 lpr=76 pi=[52,76)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:27:33 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:27:33 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:33.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:34 compute-0 podman[97474]: 2026-01-21 23:27:34.104774954 +0000 UTC m=+0.085039273 container exec fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 21 23:27:34 compute-0 podman[97474]: 2026-01-21 23:27:34.123995614 +0000 UTC m=+0.104259943 container exec_died fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 21 23:27:34 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:27:34 compute-0 podman[97539]: 2026-01-21 23:27:34.453366595 +0000 UTC m=+0.086682446 container exec 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, version=2.2.4, vcs-type=git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, release=1793)
Jan 21 23:27:34 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:34 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:27:34 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:34 compute-0 podman[97539]: 2026-01-21 23:27:34.498289434 +0000 UTC m=+0.131605295 container exec_died 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 21 23:27:34 compute-0 sudo[97179]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:34 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:27:34 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:34 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:27:34 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:34 compute-0 sudo[97573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:34.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:34 compute-0 sudo[97573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:34 compute-0 sudo[97573]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:34 compute-0 sudo[97598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:27:34 compute-0 sudo[97598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:34 compute-0 sudo[97598]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:34 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 21 23:27:34 compute-0 ceph-mon[74318]: 2.e deep-scrub starts
Jan 21 23:27:34 compute-0 ceph-mon[74318]: pgmap v186: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 597 B/s wr, 53 op/s; 32 B/s, 2 objects/s recovering
Jan 21 23:27:34 compute-0 ceph-mon[74318]: 2.e deep-scrub ok
Jan 21 23:27:34 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 21 23:27:34 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 21 23:27:34 compute-0 ceph-mon[74318]: osdmap e77: 3 total, 3 up, 3 in
Jan 21 23:27:34 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:34 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:34 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:34 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:34 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:34 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:34 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 21 23:27:34 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 21 23:27:34 compute-0 sudo[97623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:34 compute-0 sudo[97623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:34 compute-0 sudo[97623]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:34 compute-0 sudo[97648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:27:34 compute-0 sudo[97648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:35 compute-0 sudo[97648]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 2 remapped+peering, 1 peering, 1 activating, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 345 B/s wr, 2 op/s
Jan 21 23:27:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 21 23:27:35 compute-0 ceph-mon[74318]: osdmap e78: 3 total, 3 up, 3 in
Jan 21 23:27:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 21 23:27:35 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 21 23:27:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 21 23:27:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:35.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 21 23:27:36 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 21 23:27:36 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 21 23:27:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:36.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 21 23:27:36 compute-0 ceph-mon[74318]: 2.19 scrub starts
Jan 21 23:27:36 compute-0 ceph-mon[74318]: pgmap v189: 305 pgs: 2 remapped+peering, 1 peering, 1 activating, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 345 B/s wr, 2 op/s
Jan 21 23:27:36 compute-0 ceph-mon[74318]: 2.19 scrub ok
Jan 21 23:27:36 compute-0 ceph-mon[74318]: osdmap e79: 3 total, 3 up, 3 in
Jan 21 23:27:36 compute-0 ceph-mon[74318]: 7.7 scrub starts
Jan 21 23:27:36 compute-0 ceph-mon[74318]: 7.7 scrub ok
Jan 21 23:27:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 21 23:27:36 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 21 23:27:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:27:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 3 peering, 1 activating, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s; 54 B/s, 2 objects/s recovering
Jan 21 23:27:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 21 23:27:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 21 23:27:37 compute-0 ceph-mon[74318]: 4.14 scrub starts
Jan 21 23:27:37 compute-0 ceph-mon[74318]: 4.14 scrub ok
Jan 21 23:27:37 compute-0 ceph-mon[74318]: osdmap e80: 3 total, 3 up, 3 in
Jan 21 23:27:37 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 21 23:27:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:37.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 21 23:27:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:38.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 21 23:27:39 compute-0 ceph-mon[74318]: pgmap v192: 305 pgs: 3 peering, 1 activating, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s; 54 B/s, 2 objects/s recovering
Jan 21 23:27:39 compute-0 ceph-mon[74318]: 6.4 deep-scrub starts
Jan 21 23:27:39 compute-0 ceph-mon[74318]: 6.4 deep-scrub ok
Jan 21 23:27:39 compute-0 ceph-mon[74318]: osdmap e81: 3 total, 3 up, 3 in
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:27:39
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Some PGs (0.013115) are inactive; try again later
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 3 peering, 302 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 47 B/s, 2 objects/s recovering
Jan 21 23:27:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:27:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:27:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:27:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:27:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev b7e25116-2b12-4608-a38c-c62b739d4e1f does not exist
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 376b6073-262e-4d72-8c0a-17295c22123f does not exist
Jan 21 23:27:39 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev a7ba55d1-b93b-45ac-9fd2-617632cfb916 does not exist
Jan 21 23:27:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:27:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:27:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:27:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:27:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:27:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:39 compute-0 sudo[97708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:39 compute-0 sudo[97708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:39 compute-0 sudo[97708]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:39 compute-0 sudo[97733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:27:39 compute-0 sudo[97733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:39 compute-0 sudo[97733]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:39 compute-0 sudo[97758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:39 compute-0 sudo[97758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:39 compute-0 sudo[97758]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:39.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:39 compute-0 sudo[97783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:27:39 compute-0 sudo[97783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:40 compute-0 ceph-mon[74318]: 3.0 scrub starts
Jan 21 23:27:40 compute-0 ceph-mon[74318]: 3.0 scrub ok
Jan 21 23:27:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:27:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:27:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:27:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:27:40 compute-0 podman[97846]: 2026-01-21 23:27:40.291103319 +0000 UTC m=+0.035091533 container create 1b1fc8ec9cf0e44bb44590a24a10d7a23013bafd814df1f8472ccd880144a2bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yalow, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:27:40 compute-0 systemd[1]: Started libpod-conmon-1b1fc8ec9cf0e44bb44590a24a10d7a23013bafd814df1f8472ccd880144a2bb.scope.
Jan 21 23:27:40 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:40 compute-0 podman[97846]: 2026-01-21 23:27:40.370332801 +0000 UTC m=+0.114321035 container init 1b1fc8ec9cf0e44bb44590a24a10d7a23013bafd814df1f8472ccd880144a2bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yalow, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:27:40 compute-0 podman[97846]: 2026-01-21 23:27:40.275211796 +0000 UTC m=+0.019200020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:27:40 compute-0 podman[97846]: 2026-01-21 23:27:40.378646507 +0000 UTC m=+0.122634751 container start 1b1fc8ec9cf0e44bb44590a24a10d7a23013bafd814df1f8472ccd880144a2bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yalow, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:27:40 compute-0 angry_yalow[97861]: 167 167
Jan 21 23:27:40 compute-0 systemd[1]: libpod-1b1fc8ec9cf0e44bb44590a24a10d7a23013bafd814df1f8472ccd880144a2bb.scope: Deactivated successfully.
Jan 21 23:27:40 compute-0 podman[97846]: 2026-01-21 23:27:40.383598676 +0000 UTC m=+0.127586980 container attach 1b1fc8ec9cf0e44bb44590a24a10d7a23013bafd814df1f8472ccd880144a2bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yalow, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:27:40 compute-0 podman[97846]: 2026-01-21 23:27:40.384148831 +0000 UTC m=+0.128137045 container died 1b1fc8ec9cf0e44bb44590a24a10d7a23013bafd814df1f8472ccd880144a2bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:27:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5fa6d53a4d8a2f7d25a07d4458cf7ed25c1e725ffc42df602a4bd93546452cf-merged.mount: Deactivated successfully.
Jan 21 23:27:40 compute-0 podman[97846]: 2026-01-21 23:27:40.421994886 +0000 UTC m=+0.165983080 container remove 1b1fc8ec9cf0e44bb44590a24a10d7a23013bafd814df1f8472ccd880144a2bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yalow, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 21 23:27:40 compute-0 systemd[1]: libpod-conmon-1b1fc8ec9cf0e44bb44590a24a10d7a23013bafd814df1f8472ccd880144a2bb.scope: Deactivated successfully.
Jan 21 23:27:40 compute-0 podman[97886]: 2026-01-21 23:27:40.643475869 +0000 UTC m=+0.063160865 container create 9f07da80fa2f78fa4e8becdfb81ccfceb576200c66e8a5931d72091804091abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 21 23:27:40 compute-0 systemd[1]: Started libpod-conmon-9f07da80fa2f78fa4e8becdfb81ccfceb576200c66e8a5931d72091804091abb.scope.
Jan 21 23:27:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:40.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:40 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05d51cd49539fab93c3d3c2517c44b07c2cc9dd3e4a105f20060155fd872084/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:40 compute-0 podman[97886]: 2026-01-21 23:27:40.623745136 +0000 UTC m=+0.043430182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05d51cd49539fab93c3d3c2517c44b07c2cc9dd3e4a105f20060155fd872084/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05d51cd49539fab93c3d3c2517c44b07c2cc9dd3e4a105f20060155fd872084/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05d51cd49539fab93c3d3c2517c44b07c2cc9dd3e4a105f20060155fd872084/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05d51cd49539fab93c3d3c2517c44b07c2cc9dd3e4a105f20060155fd872084/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:40 compute-0 podman[97886]: 2026-01-21 23:27:40.734330433 +0000 UTC m=+0.154015479 container init 9f07da80fa2f78fa4e8becdfb81ccfceb576200c66e8a5931d72091804091abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:27:40 compute-0 podman[97886]: 2026-01-21 23:27:40.745245507 +0000 UTC m=+0.164930503 container start 9f07da80fa2f78fa4e8becdfb81ccfceb576200c66e8a5931d72091804091abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_knuth, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 23:27:40 compute-0 podman[97886]: 2026-01-21 23:27:40.749534179 +0000 UTC m=+0.169219205 container attach 9f07da80fa2f78fa4e8becdfb81ccfceb576200c66e8a5931d72091804091abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:27:40 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 21 23:27:41 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 21 23:27:41 compute-0 ceph-mon[74318]: pgmap v194: 305 pgs: 3 peering, 302 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 47 B/s, 2 objects/s recovering
Jan 21 23:27:41 compute-0 ceph-mon[74318]: 6.c scrub starts
Jan 21 23:27:41 compute-0 ceph-mon[74318]: 6.c scrub ok
Jan 21 23:27:41 compute-0 ceph-mon[74318]: 7.c scrub starts
Jan 21 23:27:41 compute-0 ceph-mon[74318]: 7.c scrub ok
Jan 21 23:27:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 2 peering, 303 active+clean; 458 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Jan 21 23:27:41 compute-0 interesting_knuth[97902]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:27:41 compute-0 interesting_knuth[97902]: --> relative data size: 1.0
Jan 21 23:27:41 compute-0 interesting_knuth[97902]: --> All data devices are unavailable
Jan 21 23:27:41 compute-0 systemd[1]: libpod-9f07da80fa2f78fa4e8becdfb81ccfceb576200c66e8a5931d72091804091abb.scope: Deactivated successfully.
Jan 21 23:27:41 compute-0 podman[97886]: 2026-01-21 23:27:41.63925518 +0000 UTC m=+1.058940176 container died 9f07da80fa2f78fa4e8becdfb81ccfceb576200c66e8a5931d72091804091abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_knuth, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:27:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a05d51cd49539fab93c3d3c2517c44b07c2cc9dd3e4a105f20060155fd872084-merged.mount: Deactivated successfully.
Jan 21 23:27:41 compute-0 podman[97886]: 2026-01-21 23:27:41.717796084 +0000 UTC m=+1.137481090 container remove 9f07da80fa2f78fa4e8becdfb81ccfceb576200c66e8a5931d72091804091abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 23:27:41 compute-0 systemd[1]: libpod-conmon-9f07da80fa2f78fa4e8becdfb81ccfceb576200c66e8a5931d72091804091abb.scope: Deactivated successfully.
Jan 21 23:27:41 compute-0 sudo[97783]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:41 compute-0 sudo[97930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:41 compute-0 sudo[97930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:41 compute-0 sudo[97930]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:41 compute-0 sudo[97955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:27:41 compute-0 sudo[97955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:41 compute-0 sudo[97955]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:27:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:41.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:27:42 compute-0 sudo[97980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:42 compute-0 sudo[97980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:42 compute-0 sudo[97980]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:42 compute-0 sudo[98005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:27:42 compute-0 sudo[98005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:42 compute-0 ceph-mon[74318]: 8.1 scrub starts
Jan 21 23:27:42 compute-0 ceph-mon[74318]: 8.1 scrub ok
Jan 21 23:27:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:27:42 compute-0 podman[98068]: 2026-01-21 23:27:42.48050751 +0000 UTC m=+0.046655185 container create d1dc9fc4c3f794211b27fe49e5f132085889df0a14e90573b6a06f28113f67d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:27:42 compute-0 systemd[1]: Started libpod-conmon-d1dc9fc4c3f794211b27fe49e5f132085889df0a14e90573b6a06f28113f67d8.scope.
Jan 21 23:27:42 compute-0 podman[98068]: 2026-01-21 23:27:42.45778847 +0000 UTC m=+0.023936125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:27:42 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:42 compute-0 podman[98068]: 2026-01-21 23:27:42.590146013 +0000 UTC m=+0.156293738 container init d1dc9fc4c3f794211b27fe49e5f132085889df0a14e90573b6a06f28113f67d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lederberg, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:27:42 compute-0 podman[98068]: 2026-01-21 23:27:42.598897121 +0000 UTC m=+0.165044766 container start d1dc9fc4c3f794211b27fe49e5f132085889df0a14e90573b6a06f28113f67d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lederberg, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:27:42 compute-0 podman[98068]: 2026-01-21 23:27:42.602767672 +0000 UTC m=+0.168915347 container attach d1dc9fc4c3f794211b27fe49e5f132085889df0a14e90573b6a06f28113f67d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lederberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 23:27:42 compute-0 flamboyant_lederberg[98084]: 167 167
Jan 21 23:27:42 compute-0 systemd[1]: libpod-d1dc9fc4c3f794211b27fe49e5f132085889df0a14e90573b6a06f28113f67d8.scope: Deactivated successfully.
Jan 21 23:27:42 compute-0 conmon[98084]: conmon d1dc9fc4c3f794211b27 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d1dc9fc4c3f794211b27fe49e5f132085889df0a14e90573b6a06f28113f67d8.scope/container/memory.events
Jan 21 23:27:42 compute-0 podman[98068]: 2026-01-21 23:27:42.606405806 +0000 UTC m=+0.172553451 container died d1dc9fc4c3f794211b27fe49e5f132085889df0a14e90573b6a06f28113f67d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lederberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 21 23:27:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-508889d18e8a1da3b66e7ea35e7824999cd06edd512a04e36c8fa11768c515bd-merged.mount: Deactivated successfully.
Jan 21 23:27:42 compute-0 podman[98068]: 2026-01-21 23:27:42.652937537 +0000 UTC m=+0.219085182 container remove d1dc9fc4c3f794211b27fe49e5f132085889df0a14e90573b6a06f28113f67d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:27:42 compute-0 systemd[1]: libpod-conmon-d1dc9fc4c3f794211b27fe49e5f132085889df0a14e90573b6a06f28113f67d8.scope: Deactivated successfully.
Jan 21 23:27:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:42.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:42 compute-0 podman[98109]: 2026-01-21 23:27:42.904969325 +0000 UTC m=+0.130342783 container create dce525fc55ffe06cfd559c6d4d8ae66d07a4c20cf836e93d5c8b56b366a28bb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:27:42 compute-0 podman[98109]: 2026-01-21 23:27:42.815761084 +0000 UTC m=+0.041134532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:27:42 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 21 23:27:42 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 21 23:27:43 compute-0 systemd[1]: Started libpod-conmon-dce525fc55ffe06cfd559c6d4d8ae66d07a4c20cf836e93d5c8b56b366a28bb4.scope.
Jan 21 23:27:43 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7dbb66e544bf8291221e234feb5b548c712890c87b30942b8760ae03d009aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7dbb66e544bf8291221e234feb5b548c712890c87b30942b8760ae03d009aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7dbb66e544bf8291221e234feb5b548c712890c87b30942b8760ae03d009aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7dbb66e544bf8291221e234feb5b548c712890c87b30942b8760ae03d009aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 305 active+clean; 458 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 72 B/s, 3 objects/s recovering
Jan 21 23:27:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Jan 21 23:27:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 21 23:27:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Jan 21 23:27:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 21 23:27:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 21 23:27:43 compute-0 ceph-mon[74318]: pgmap v195: 305 pgs: 2 peering, 303 active+clean; 458 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Jan 21 23:27:43 compute-0 ceph-mon[74318]: 8.7 scrub starts
Jan 21 23:27:43 compute-0 ceph-mon[74318]: 8.7 scrub ok
Jan 21 23:27:43 compute-0 ceph-mon[74318]: 2.a scrub starts
Jan 21 23:27:43 compute-0 ceph-mon[74318]: 2.a scrub ok
Jan 21 23:27:43 compute-0 ceph-mon[74318]: 7.d scrub starts
Jan 21 23:27:43 compute-0 ceph-mon[74318]: 7.d scrub ok
Jan 21 23:27:43 compute-0 podman[98109]: 2026-01-21 23:27:43.749414318 +0000 UTC m=+0.974787846 container init dce525fc55ffe06cfd559c6d4d8ae66d07a4c20cf836e93d5c8b56b366a28bb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Jan 21 23:27:43 compute-0 podman[98109]: 2026-01-21 23:27:43.760427195 +0000 UTC m=+0.985800653 container start dce525fc55ffe06cfd559c6d4d8ae66d07a4c20cf836e93d5c8b56b366a28bb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 23:27:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 21 23:27:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 21 23:27:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 21 23:27:43 compute-0 podman[98109]: 2026-01-21 23:27:43.765475446 +0000 UTC m=+0.990848914 container attach dce525fc55ffe06cfd559c6d4d8ae66d07a4c20cf836e93d5c8b56b366a28bb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 21 23:27:43 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 21 23:27:43 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 82 pg[9.a( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=82) [1] r=0 lpr=82 pi=[54,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:43 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 82 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=82) [1] r=0 lpr=82 pi=[54,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:27:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:43.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:27:43 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 21 23:27:43 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 21 23:27:44 compute-0 lucid_hopper[98130]: {
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:     "1": [
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:         {
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:             "devices": [
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:                 "/dev/loop3"
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:             ],
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:             "lv_name": "ceph_lv0",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:             "lv_size": "7511998464",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:             "name": "ceph_lv0",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:             "tags": {
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:                 "ceph.cluster_name": "ceph",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:                 "ceph.crush_device_class": "",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:                 "ceph.encrypted": "0",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:                 "ceph.osd_id": "1",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:                 "ceph.type": "block",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:                 "ceph.vdo": "0"
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:             },
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:             "type": "block",
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:             "vg_name": "ceph_vg0"
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:         }
Jan 21 23:27:44 compute-0 lucid_hopper[98130]:     ]
Jan 21 23:27:44 compute-0 lucid_hopper[98130]: }
Jan 21 23:27:44 compute-0 systemd[1]: libpod-dce525fc55ffe06cfd559c6d4d8ae66d07a4c20cf836e93d5c8b56b366a28bb4.scope: Deactivated successfully.
Jan 21 23:27:44 compute-0 podman[98109]: 2026-01-21 23:27:44.540950234 +0000 UTC m=+1.766323682 container died dce525fc55ffe06cfd559c6d4d8ae66d07a4c20cf836e93d5c8b56b366a28bb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:27:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c7dbb66e544bf8291221e234feb5b548c712890c87b30942b8760ae03d009aa-merged.mount: Deactivated successfully.
Jan 21 23:27:44 compute-0 podman[98109]: 2026-01-21 23:27:44.60763368 +0000 UTC m=+1.833007128 container remove dce525fc55ffe06cfd559c6d4d8ae66d07a4c20cf836e93d5c8b56b366a28bb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 23:27:44 compute-0 systemd[1]: libpod-conmon-dce525fc55ffe06cfd559c6d4d8ae66d07a4c20cf836e93d5c8b56b366a28bb4.scope: Deactivated successfully.
Jan 21 23:27:44 compute-0 sudo[98005]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:44.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:44 compute-0 sudo[98153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:44 compute-0 sudo[98153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:44 compute-0 sudo[98153]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:44 compute-0 ceph-mon[74318]: 8.e scrub starts
Jan 21 23:27:44 compute-0 ceph-mon[74318]: 8.e scrub ok
Jan 21 23:27:44 compute-0 ceph-mon[74318]: pgmap v196: 305 pgs: 305 active+clean; 458 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 72 B/s, 3 objects/s recovering
Jan 21 23:27:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 21 23:27:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 21 23:27:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 21 23:27:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 21 23:27:44 compute-0 ceph-mon[74318]: osdmap e82: 3 total, 3 up, 3 in
Jan 21 23:27:44 compute-0 ceph-mon[74318]: 7.12 scrub starts
Jan 21 23:27:44 compute-0 ceph-mon[74318]: 7.12 scrub ok
Jan 21 23:27:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 21 23:27:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 21 23:27:44 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 21 23:27:44 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 83 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=83) [1]/[0] r=-1 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:44 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 83 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=83) [1]/[0] r=-1 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:44 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 83 pg[9.a( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=83) [1]/[0] r=-1 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:44 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 83 pg[9.a( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=83) [1]/[0] r=-1 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:44 compute-0 sudo[98178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:27:44 compute-0 sudo[98178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:44 compute-0 sudo[98178]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:44 compute-0 sudo[98203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:44 compute-0 sudo[98203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:44 compute-0 sudo[98203]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:44 compute-0 sudo[98228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:27:44 compute-0 sudo[98228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:45 compute-0 podman[98294]: 2026-01-21 23:27:45.355831839 +0000 UTC m=+0.049123029 container create c0a9fa1fa0c937563eea6cc8f8757216d0cd44ddedf0770e95710cd3e0a54433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 23:27:45 compute-0 systemd[1]: Started libpod-conmon-c0a9fa1fa0c937563eea6cc8f8757216d0cd44ddedf0770e95710cd3e0a54433.scope.
Jan 21 23:27:45 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:45 compute-0 podman[98294]: 2026-01-21 23:27:45.332513853 +0000 UTC m=+0.025805053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:27:45 compute-0 podman[98294]: 2026-01-21 23:27:45.437924405 +0000 UTC m=+0.131215615 container init c0a9fa1fa0c937563eea6cc8f8757216d0cd44ddedf0770e95710cd3e0a54433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 21 23:27:45 compute-0 podman[98294]: 2026-01-21 23:27:45.449096096 +0000 UTC m=+0.142387276 container start c0a9fa1fa0c937563eea6cc8f8757216d0cd44ddedf0770e95710cd3e0a54433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 21 23:27:45 compute-0 podman[98294]: 2026-01-21 23:27:45.452738301 +0000 UTC m=+0.146029501 container attach c0a9fa1fa0c937563eea6cc8f8757216d0cd44ddedf0770e95710cd3e0a54433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 21 23:27:45 compute-0 determined_lederberg[98310]: 167 167
Jan 21 23:27:45 compute-0 systemd[1]: libpod-c0a9fa1fa0c937563eea6cc8f8757216d0cd44ddedf0770e95710cd3e0a54433.scope: Deactivated successfully.
Jan 21 23:27:45 compute-0 podman[98294]: 2026-01-21 23:27:45.455509403 +0000 UTC m=+0.148800563 container died c0a9fa1fa0c937563eea6cc8f8757216d0cd44ddedf0770e95710cd3e0a54433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lederberg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:27:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 43 B/s, 1 objects/s recovering
Jan 21 23:27:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd26893f37a14458a088921547acdca0c250cf7e28a05649f9c35e14957ff2f6-merged.mount: Deactivated successfully.
Jan 21 23:27:45 compute-0 podman[98294]: 2026-01-21 23:27:45.498907632 +0000 UTC m=+0.192198802 container remove c0a9fa1fa0c937563eea6cc8f8757216d0cd44ddedf0770e95710cd3e0a54433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 23:27:45 compute-0 systemd[1]: libpod-conmon-c0a9fa1fa0c937563eea6cc8f8757216d0cd44ddedf0770e95710cd3e0a54433.scope: Deactivated successfully.
Jan 21 23:27:45 compute-0 podman[98334]: 2026-01-21 23:27:45.665932659 +0000 UTC m=+0.060102076 container create 42dcbe9ce90def2a4b22989c006bed80d2b4f7725774fac8d543062d955f1598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 21 23:27:45 compute-0 systemd[1]: Started libpod-conmon-42dcbe9ce90def2a4b22989c006bed80d2b4f7725774fac8d543062d955f1598.scope.
Jan 21 23:27:45 compute-0 podman[98334]: 2026-01-21 23:27:45.641348728 +0000 UTC m=+0.035518145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:27:45 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b9be156c2bd8c1c27977f2e3032096611fdcc37d9e197cb8405f69334a29bcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b9be156c2bd8c1c27977f2e3032096611fdcc37d9e197cb8405f69334a29bcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b9be156c2bd8c1c27977f2e3032096611fdcc37d9e197cb8405f69334a29bcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b9be156c2bd8c1c27977f2e3032096611fdcc37d9e197cb8405f69334a29bcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:27:45 compute-0 podman[98334]: 2026-01-21 23:27:45.771866585 +0000 UTC m=+0.166036012 container init 42dcbe9ce90def2a4b22989c006bed80d2b4f7725774fac8d543062d955f1598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 21 23:27:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 21 23:27:45 compute-0 ceph-mon[74318]: 2.10 scrub starts
Jan 21 23:27:45 compute-0 ceph-mon[74318]: 2.10 scrub ok
Jan 21 23:27:45 compute-0 ceph-mon[74318]: osdmap e83: 3 total, 3 up, 3 in
Jan 21 23:27:45 compute-0 podman[98334]: 2026-01-21 23:27:45.783471427 +0000 UTC m=+0.177640854 container start 42dcbe9ce90def2a4b22989c006bed80d2b4f7725774fac8d543062d955f1598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 21 23:27:45 compute-0 podman[98334]: 2026-01-21 23:27:45.791001173 +0000 UTC m=+0.185170570 container attach 42dcbe9ce90def2a4b22989c006bed80d2b4f7725774fac8d543062d955f1598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pascal, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 23:27:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 21 23:27:45 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 21 23:27:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:27:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:45.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:27:46 compute-0 jovial_pascal[98350]: {
Jan 21 23:27:46 compute-0 jovial_pascal[98350]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:27:46 compute-0 jovial_pascal[98350]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:27:46 compute-0 jovial_pascal[98350]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:27:46 compute-0 jovial_pascal[98350]:         "osd_id": 1,
Jan 21 23:27:46 compute-0 jovial_pascal[98350]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:27:46 compute-0 jovial_pascal[98350]:         "type": "bluestore"
Jan 21 23:27:46 compute-0 jovial_pascal[98350]:     }
Jan 21 23:27:46 compute-0 jovial_pascal[98350]: }
Jan 21 23:27:46 compute-0 systemd[1]: libpod-42dcbe9ce90def2a4b22989c006bed80d2b4f7725774fac8d543062d955f1598.scope: Deactivated successfully.
Jan 21 23:27:46 compute-0 podman[98334]: 2026-01-21 23:27:46.67940577 +0000 UTC m=+1.073575207 container died 42dcbe9ce90def2a4b22989c006bed80d2b4f7725774fac8d543062d955f1598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pascal, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Jan 21 23:27:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b9be156c2bd8c1c27977f2e3032096611fdcc37d9e197cb8405f69334a29bcf-merged.mount: Deactivated successfully.
Jan 21 23:27:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:46.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:46 compute-0 podman[98334]: 2026-01-21 23:27:46.752274636 +0000 UTC m=+1.146444043 container remove 42dcbe9ce90def2a4b22989c006bed80d2b4f7725774fac8d543062d955f1598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:27:46 compute-0 systemd[1]: libpod-conmon-42dcbe9ce90def2a4b22989c006bed80d2b4f7725774fac8d543062d955f1598.scope: Deactivated successfully.
Jan 21 23:27:46 compute-0 sudo[98228]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 21 23:27:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:27:46 compute-0 ceph-mon[74318]: 8.13 scrub starts
Jan 21 23:27:46 compute-0 ceph-mon[74318]: 8.13 scrub ok
Jan 21 23:27:46 compute-0 ceph-mon[74318]: pgmap v199: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 43 B/s, 1 objects/s recovering
Jan 21 23:27:46 compute-0 ceph-mon[74318]: osdmap e84: 3 total, 3 up, 3 in
Jan 21 23:27:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 21 23:27:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:46 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 21 23:27:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 85 pg[9.1a( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=5 ec=54/42 lis/c=83/54 les/c/f=84/55/0 sis=85) [1] r=0 lpr=85 pi=[54,85)/1 luod=0'0 crt=49'1136 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 85 pg[9.1a( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=5 ec=54/42 lis/c=83/54 les/c/f=84/55/0 sis=85) [1] r=0 lpr=85 pi=[54,85)/1 crt=49'1136 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:27:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 85 pg[9.a( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=6 ec=54/42 lis/c=83/54 les/c/f=84/55/0 sis=85) [1] r=0 lpr=85 pi=[54,85)/1 luod=0'0 crt=49'1136 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 85 pg[9.a( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=6 ec=54/42 lis/c=83/54 les/c/f=84/55/0 sis=85) [1] r=0 lpr=85 pi=[54,85)/1 crt=49'1136 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:46 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev a7652611-289d-4174-8a7c-a0902e8a0359 does not exist
Jan 21 23:27:46 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 3955a7d2-0868-4c8f-9828-34f703570895 does not exist
Jan 21 23:27:46 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 7d9a733b-31b0-4212-89b6-9a9a917e5000 does not exist
Jan 21 23:27:46 compute-0 sudo[98382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:46 compute-0 sudo[98382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:46 compute-0 sudo[98382]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:47 compute-0 sudo[98407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:27:47 compute-0 sudo[98407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:47 compute-0 sudo[98407]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:27:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:27:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 21 23:27:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:47 compute-0 ceph-mon[74318]: osdmap e85: 3 total, 3 up, 3 in
Jan 21 23:27:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:27:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 21 23:27:47 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 21 23:27:47 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 86 pg[9.a( v 49'1136 (0'0,49'1136] local-lis/les=85/86 n=6 ec=54/42 lis/c=83/54 les/c/f=84/55/0 sis=85) [1] r=0 lpr=85 pi=[54,85)/1 crt=49'1136 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:47 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 86 pg[9.1a( v 49'1136 (0'0,49'1136] local-lis/les=85/86 n=5 ec=54/42 lis/c=83/54 les/c/f=84/55/0 sis=85) [1] r=0 lpr=85 pi=[54,85)/1 crt=49'1136 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:27:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:47.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:27:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:48.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:48 compute-0 ceph-mon[74318]: pgmap v202: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:27:48 compute-0 ceph-mon[74318]: osdmap e86: 3 total, 3 up, 3 in
Jan 21 23:27:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 0 B/s wr, 36 op/s; 46 B/s, 2 objects/s recovering
Jan 21 23:27:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Jan 21 23:27:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 21 23:27:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Jan 21 23:27:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 21 23:27:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 21 23:27:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 21 23:27:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 21 23:27:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 21 23:27:49 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 21 23:27:49 compute-0 ceph-mon[74318]: 4.3 scrub starts
Jan 21 23:27:49 compute-0 ceph-mon[74318]: 4.3 scrub ok
Jan 21 23:27:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 21 23:27:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 21 23:27:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:27:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:49.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:27:49 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 21 23:27:49 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 21 23:27:50 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 87 pg[6.b( v 49'39 (0'0,49'39] local-lis/les=62/63 n=1 ec=52/24 lis/c=62/62 les/c/f=63/63/0 sis=87 pruub=10.857521057s) [0] r=-1 lpr=87 pi=[62,87)/1 crt=49'39 mlcod 49'39 active pruub 172.411529541s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:50 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 87 pg[6.b( v 49'39 (0'0,49'39] local-lis/les=62/63 n=1 ec=52/24 lis/c=62/62 les/c/f=63/63/0 sis=87 pruub=10.857442856s) [0] r=-1 lpr=87 pi=[62,87)/1 crt=49'39 mlcod 0'0 unknown NOTIFY pruub 172.411529541s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:27:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:50.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:27:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 21 23:27:50 compute-0 ceph-mon[74318]: pgmap v204: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 0 B/s wr, 36 op/s; 46 B/s, 2 objects/s recovering
Jan 21 23:27:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 21 23:27:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 21 23:27:50 compute-0 ceph-mon[74318]: osdmap e87: 3 total, 3 up, 3 in
Jan 21 23:27:50 compute-0 ceph-mon[74318]: 7.15 scrub starts
Jan 21 23:27:50 compute-0 ceph-mon[74318]: 7.15 scrub ok
Jan 21 23:27:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 21 23:27:50 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 21 23:27:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 441 B/s wr, 38 op/s; 47 B/s, 2 objects/s recovering
Jan 21 23:27:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Jan 21 23:27:51 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 21 23:27:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Jan 21 23:27:51 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 21 23:27:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:51.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 21 23:27:51 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 21 23:27:51 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 21 23:27:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 21 23:27:51 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 21 23:27:51 compute-0 ceph-mon[74318]: osdmap e88: 3 total, 3 up, 3 in
Jan 21 23:27:51 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 21 23:27:51 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 21 23:27:51 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.17 deep-scrub starts
Jan 21 23:27:52 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.17 deep-scrub ok
Jan 21 23:27:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:27:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:27:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:52.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:27:52 compute-0 ceph-mon[74318]: pgmap v207: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 441 B/s wr, 38 op/s; 47 B/s, 2 objects/s recovering
Jan 21 23:27:52 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 21 23:27:52 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 21 23:27:52 compute-0 ceph-mon[74318]: osdmap e89: 3 total, 3 up, 3 in
Jan 21 23:27:52 compute-0 ceph-mon[74318]: 7.17 deep-scrub starts
Jan 21 23:27:52 compute-0 ceph-mon[74318]: 7.17 deep-scrub ok
Jan 21 23:27:53 compute-0 sshd-session[98435]: Accepted publickey for zuul from 192.168.122.30 port 58416 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:27:53 compute-0 systemd-logind[786]: New session 34 of user zuul.
Jan 21 23:27:53 compute-0 systemd[1]: Started Session 34 of User zuul.
Jan 21 23:27:53 compute-0 sshd-session[98435]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 364 B/s wr, 31 op/s; 39 B/s, 2 objects/s recovering
Jan 21 23:27:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Jan 21 23:27:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 21 23:27:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Jan 21 23:27:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 21 23:27:53 compute-0 sudo[98528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:53 compute-0 sudo[98528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:53 compute-0 sudo[98528]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:53 compute-0 sudo[98578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:27:53 compute-0 sudo[98578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:27:53 compute-0 sudo[98578]: pam_unix(sudo:session): session closed for user root
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:27:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:27:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:53.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 21 23:27:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 21 23:27:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 21 23:27:54 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 21 23:27:54 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 21 23:27:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 21 23:27:54 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 21 23:27:54 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 90 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=68/68 les/c/f=69/69/0 sis=90) [1] r=0 lpr=90 pi=[68,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:54 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 90 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=68/68 les/c/f=69/69/0 sis=90) [1] r=0 lpr=90 pi=[68,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:54 compute-0 python3.9[98639]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:27:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:54.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 21 23:27:55 compute-0 ceph-mon[74318]: pgmap v209: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 364 B/s wr, 31 op/s; 39 B/s, 2 objects/s recovering
Jan 21 23:27:55 compute-0 ceph-mon[74318]: 4.6 scrub starts
Jan 21 23:27:55 compute-0 ceph-mon[74318]: 4.6 scrub ok
Jan 21 23:27:55 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 21 23:27:55 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 21 23:27:55 compute-0 ceph-mon[74318]: osdmap e90: 3 total, 3 up, 3 in
Jan 21 23:27:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 21 23:27:55 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 21 23:27:55 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 91 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=68/68 les/c/f=69/69/0 sis=91) [1]/[2] r=-1 lpr=91 pi=[68,91)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:55 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 91 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=68/68 les/c/f=69/69/0 sis=91) [1]/[2] r=-1 lpr=91 pi=[68,91)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:55 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 91 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=68/68 les/c/f=69/69/0 sis=91) [1]/[2] r=-1 lpr=91 pi=[68,91)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:55 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 91 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=68/68 les/c/f=69/69/0 sis=91) [1]/[2] r=-1 lpr=91 pi=[68,91)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 23:27:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Jan 21 23:27:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 21 23:27:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Jan 21 23:27:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 21 23:27:55 compute-0 sudo[98852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdahqmiglnelzwedjwmzgsmuqnijgzzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038075.4534285-56-218840585516685/AnsiballZ_command.py'
Jan 21 23:27:55 compute-0 sudo[98852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:27:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:55.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 21 23:27:56 compute-0 ceph-mon[74318]: 4.2 scrub starts
Jan 21 23:27:56 compute-0 ceph-mon[74318]: 4.2 scrub ok
Jan 21 23:27:56 compute-0 ceph-mon[74318]: osdmap e91: 3 total, 3 up, 3 in
Jan 21 23:27:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 21 23:27:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 21 23:27:56 compute-0 python3.9[98854]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:27:56 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 21 23:27:56 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 21 23:27:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 21 23:27:56 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 21 23:27:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=52/24 lis/c=70/70 les/c/f=71/71/0 sis=92) [1] r=0 lpr=92 pi=[70,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:56.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 21 23:27:57 compute-0 ceph-mon[74318]: pgmap v212: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 23:27:57 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 21 23:27:57 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 21 23:27:57 compute-0 ceph-mon[74318]: osdmap e92: 3 total, 3 up, 3 in
Jan 21 23:27:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 21 23:27:57 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 21 23:27:57 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 21 23:27:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 93 pg[9.1d( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=5 ec=54/42 lis/c=91/68 les/c/f=92/69/0 sis=93) [1] r=0 lpr=93 pi=[68,93)/1 luod=0'0 crt=49'1136 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 93 pg[9.1d( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=5 ec=54/42 lis/c=91/68 les/c/f=92/69/0 sis=93) [1] r=0 lpr=93 pi=[68,93)/1 crt=49'1136 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 93 pg[9.d( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=6 ec=54/42 lis/c=91/68 les/c/f=92/69/0 sis=93) [1] r=0 lpr=93 pi=[68,93)/1 luod=0'0 crt=49'1136 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 93 pg[9.d( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=6 ec=54/42 lis/c=91/68 les/c/f=92/69/0 sis=93) [1] r=0 lpr=93 pi=[68,93)/1 crt=49'1136 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:57 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 21 23:27:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 93 pg[6.e( v 49'39 lc 46'17 (0'0,49'39] local-lis/les=92/93 n=1 ec=52/24 lis/c=70/70 les/c/f=71/71/0 sis=92) [1] r=0 lpr=92 pi=[70,92)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:27:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 2 active+remapped, 303 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 21 23:27:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Jan 21 23:27:57 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 21 23:27:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Jan 21 23:27:57 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 21 23:27:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:27:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:57.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:27:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 21 23:27:58 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 21 23:27:58 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 21 23:27:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 21 23:27:58 compute-0 ceph-mon[74318]: 7.19 scrub starts
Jan 21 23:27:58 compute-0 ceph-mon[74318]: osdmap e93: 3 total, 3 up, 3 in
Jan 21 23:27:58 compute-0 ceph-mon[74318]: 7.19 scrub ok
Jan 21 23:27:58 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 21 23:27:58 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 21 23:27:58 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 21 23:27:58 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 94 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=66/66 les/c/f=67/67/0 sis=94) [1] r=0 lpr=94 pi=[66,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:58 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 94 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=66/66 les/c/f=67/67/0 sis=94) [1] r=0 lpr=94 pi=[66,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:27:58 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 94 pg[6.f( v 49'39 (0'0,49'39] local-lis/les=62/63 n=1 ec=52/24 lis/c=62/62 les/c/f=63/63/0 sis=94 pruub=11.301777840s) [0] r=-1 lpr=94 pi=[62,94)/1 crt=49'39 mlcod 49'39 active pruub 180.411758423s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:58 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 94 pg[6.f( v 49'39 (0'0,49'39] local-lis/les=62/63 n=1 ec=52/24 lis/c=62/62 les/c/f=63/63/0 sis=94 pruub=11.301743507s) [0] r=-1 lpr=94 pi=[62,94)/1 crt=49'39 mlcod 0'0 unknown NOTIFY pruub 180.411758423s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:58 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 94 pg[9.1d( v 49'1136 (0'0,49'1136] local-lis/les=93/94 n=5 ec=54/42 lis/c=91/68 les/c/f=92/69/0 sis=93) [1] r=0 lpr=93 pi=[68,93)/1 crt=49'1136 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:58 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 94 pg[9.d( v 49'1136 (0'0,49'1136] local-lis/les=93/94 n=6 ec=54/42 lis/c=91/68 les/c/f=92/69/0 sis=93) [1] r=0 lpr=93 pi=[68,93)/1 crt=49'1136 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:27:58 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 21 23:27:58 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 21 23:27:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:27:58.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:27:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 21 23:27:59 compute-0 ceph-mon[74318]: pgmap v215: 305 pgs: 2 active+remapped, 303 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 21 23:27:59 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 21 23:27:59 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
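
From 23:27:58 onward the mgr keeps issuing "osd pool set ... pgp_num_actual" with a value that climbs by one (16, 17, ... 22 in this section): the mgr is walking pgp_num_actual up toward the pool's pgp_num target one placement-group split at a time, and each step appears in the mon log as a dispatch line followed, once the new osdmap epoch commits, by a matching finished line. A minimal sketch that pairs the two and measures each step, assuming the syslog prefix format above (which carries no year, so one is supplied) and tolerating both the plain and audit-channel forms:

    from datetime import datetime
    import re

    # Matches both "cmd=[...]: dispatch" and "cmd='[...]': finished" forms
    # of the mon command log lines in this capture.
    AUDIT_RE = re.compile(
        r"^(?P<ts>\w{3} +\d+ \d\d:\d\d:\d\d) \S+ ceph-mon\[\d+\]: .*"
        r"cmd='?(?P<cmd>\[.*?\])'?: (?P<phase>dispatch|finished)"
    )

    def step_durations(lines, year=2026):
        """Yield (command payload, seconds from dispatch to finished)."""
        pending = {}
        for line in lines:
            m = AUDIT_RE.match(line)
            if not m:
                continue
            ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S")
            if m['phase'] == 'dispatch':
                pending.setdefault(m['cmd'], ts)   # keep the first echo only
            elif m['cmd'] in pending:
                yield m['cmd'], (ts - pending.pop(m['cmd'])).total_seconds()
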
Jan 21 23:27:59 compute-0 ceph-mon[74318]: osdmap e94: 3 total, 3 up, 3 in
Jan 21 23:27:59 compute-0 ceph-mon[74318]: 7.1a scrub starts
Jan 21 23:27:59 compute-0 ceph-mon[74318]: 7.1a scrub ok
Jan 21 23:27:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 21 23:27:59 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 21 23:27:59 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 95 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=66/66 les/c/f=67/67/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[66,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:59 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 95 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=66/66 les/c/f=67/67/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[66,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:59 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 95 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=66/66 les/c/f=67/67/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[66,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:27:59 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 95 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=66/66 les/c/f=67/67/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[66,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 23:27:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 0 objects/s recovering
Jan 21 23:27:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:27:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:27:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:27:59.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:00 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 21 23:28:00 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 21 23:28:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 21 23:28:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 21 23:28:00 compute-0 ceph-mon[74318]: osdmap e95: 3 total, 3 up, 3 in
Jan 21 23:28:00 compute-0 ceph-mon[74318]: 9.1 scrub starts
Jan 21 23:28:00 compute-0 ceph-mon[74318]: 9.1 scrub ok
Jan 21 23:28:00 compute-0 ceph-mon[74318]: pgmap v218: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 0 objects/s recovering
Jan 21 23:28:00 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 21 23:28:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:00.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 21 23:28:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 21 23:28:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 97 pg[9.f( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=6 ec=54/42 lis/c=95/66 les/c/f=96/67/0 sis=97) [1] r=0 lpr=97 pi=[66,97)/1 luod=0'0 crt=49'1136 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 97 pg[9.f( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=6 ec=54/42 lis/c=95/66 les/c/f=96/67/0 sis=97) [1] r=0 lpr=97 pi=[66,97)/1 crt=49'1136 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:28:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 97 pg[9.1f( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=5 ec=54/42 lis/c=95/66 les/c/f=96/67/0 sis=97) [1] r=0 lpr=97 pi=[66,97)/1 luod=0'0 crt=49'1136 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 97 pg[9.1f( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=5 ec=54/42 lis/c=95/66 les/c/f=96/67/0 sis=97) [1] r=0 lpr=97 pi=[66,97)/1 crt=49'1136 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:28:01 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 21 23:28:01 compute-0 ceph-mon[74318]: 7.1c scrub starts
Jan 21 23:28:01 compute-0 ceph-mon[74318]: 7.1c scrub ok
Jan 21 23:28:01 compute-0 ceph-mon[74318]: osdmap e96: 3 total, 3 up, 3 in
Jan 21 23:28:01 compute-0 ceph-mon[74318]: 2.1b scrub starts
Jan 21 23:28:01 compute-0 ceph-mon[74318]: 2.1b scrub ok
Jan 21 23:28:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 95 B/s, 0 objects/s recovering
Jan 21 23:28:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:28:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:01.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:28:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:28:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 21 23:28:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 21 23:28:02 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 21 23:28:02 compute-0 ceph-mon[74318]: osdmap e97: 3 total, 3 up, 3 in
Jan 21 23:28:02 compute-0 ceph-mon[74318]: 8.1a scrub starts
Jan 21 23:28:02 compute-0 ceph-mon[74318]: 8.1a scrub ok
Jan 21 23:28:02 compute-0 ceph-mon[74318]: pgmap v221: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 95 B/s, 0 objects/s recovering
Jan 21 23:28:02 compute-0 ceph-mon[74318]: 4.1d scrub starts
Jan 21 23:28:02 compute-0 ceph-mon[74318]: 4.1d scrub ok
Jan 21 23:28:02 compute-0 ceph-mon[74318]: osdmap e98: 3 total, 3 up, 3 in
Jan 21 23:28:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 98 pg[9.1f( v 49'1136 (0'0,49'1136] local-lis/les=97/98 n=5 ec=54/42 lis/c=95/66 les/c/f=96/67/0 sis=97) [1] r=0 lpr=97 pi=[66,97)/1 crt=49'1136 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:28:02 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 98 pg[9.f( v 49'1136 (0'0,49'1136] local-lis/les=97/98 n=6 ec=54/42 lis/c=95/66 les/c/f=96/67/0 sis=97) [1] r=0 lpr=97 pi=[66,97)/1 crt=49'1136 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:28:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:02.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:03 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 21 23:28:03 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 21 23:28:03 compute-0 sudo[98852]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 142 B/s, 0 objects/s recovering
Jan 21 23:28:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:28:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:03.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:28:04 compute-0 ceph-mon[74318]: 10.6 scrub starts
Jan 21 23:28:04 compute-0 ceph-mon[74318]: 10.6 scrub ok
Jan 21 23:28:04 compute-0 ceph-mon[74318]: pgmap v223: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 142 B/s, 0 objects/s recovering
Jan 21 23:28:04 compute-0 sshd-session[98438]: Connection closed by 192.168.122.30 port 58416
Jan 21 23:28:04 compute-0 sshd-session[98435]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:28:04 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Jan 21 23:28:04 compute-0 systemd[1]: session-34.scope: Consumed 8.714s CPU time.
Jan 21 23:28:04 compute-0 systemd-logind[786]: Session 34 logged out. Waiting for processes to exit.
Jan 21 23:28:04 compute-0 systemd-logind[786]: Removed session 34.
Jan 21 23:28:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:04.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:05 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 21 23:28:05 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 21 23:28:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 305 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 14 op/s; 173 B/s, 4 objects/s recovering
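
The pgmap vN lines are the mgr's periodic cluster digest, echoed by the mon a second or two later: PG state counts, logical data versus raw usage, and client/recovery throughput. A sketch for splitting out the state counts (layout inferred from these lines; pg_states is an illustrative helper), e.g. to flag any interval where something other than active+clean persists:

    import re

    # "pgmap vN: <total> pgs: <count state>, ...;" layout from this capture.
    PGMAP_RE = re.compile(r"pgmap v(?P<ver>\d+): (?P<total>\d+) pgs: (?P<states>[^;]+);")

    def pg_states(line):
        """Return (version, total_pgs, {state: count}) or None."""
        m = PGMAP_RE.search(line)
        if not m:
            return None
        states = {}
        for part in m['states'].split(','):
            count, state = part.strip().split(' ', 1)
            states[state] = int(count)
        return int(m['ver']), int(m['total']), states

    line = ("pgmap v215: 305 pgs: 2 active+remapped, 303 active+clean; "
            "457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail")
    print(pg_states(line))  # (215, 305, {'active+remapped': 2, 'active+clean': 303})
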
Jan 21 23:28:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Jan 21 23:28:05 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 21 23:28:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 21 23:28:05 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 21 23:28:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 21 23:28:05 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 21 23:28:05 compute-0 ceph-mon[74318]: 8.1d scrub starts
Jan 21 23:28:05 compute-0 ceph-mon[74318]: 8.1d scrub ok
Jan 21 23:28:05 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 21 23:28:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:05.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:06 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 21 23:28:06 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 21 23:28:06 compute-0 ceph-mon[74318]: 10.7 scrub starts
Jan 21 23:28:06 compute-0 ceph-mon[74318]: 10.7 scrub ok
Jan 21 23:28:06 compute-0 ceph-mon[74318]: 8.1e scrub starts
Jan 21 23:28:06 compute-0 ceph-mon[74318]: 8.1e scrub ok
Jan 21 23:28:06 compute-0 ceph-mon[74318]: pgmap v224: 305 pgs: 305 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 14 op/s; 173 B/s, 4 objects/s recovering
Jan 21 23:28:06 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 21 23:28:06 compute-0 ceph-mon[74318]: osdmap e99: 3 total, 3 up, 3 in
Jan 21 23:28:06 compute-0 ceph-mon[74318]: 10.9 scrub starts
Jan 21 23:28:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:06.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:07 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 99 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=99) [1] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:28:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:28:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 21 23:28:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 21 23:28:07 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 21 23:28:07 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 100 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:07 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 100 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 23:28:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 341 B/s wr, 29 op/s; 73 B/s, 3 objects/s recovering
Jan 21 23:28:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Jan 21 23:28:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 21 23:28:07 compute-0 ceph-mon[74318]: 10.9 scrub ok
Jan 21 23:28:07 compute-0 ceph-mon[74318]: 3.1b scrub starts
Jan 21 23:28:07 compute-0 ceph-mon[74318]: 3.1b scrub ok
Jan 21 23:28:07 compute-0 ceph-mon[74318]: osdmap e100: 3 total, 3 up, 3 in
Jan 21 23:28:07 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 21 23:28:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:07.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 21 23:28:08 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 21 23:28:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 21 23:28:08 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 21 23:28:08 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 101 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=101) [1] r=0 lpr=101 pi=[54,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:28:08 compute-0 ceph-mon[74318]: pgmap v227: 305 pgs: 305 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 341 B/s wr, 29 op/s; 73 B/s, 3 objects/s recovering
Jan 21 23:28:08 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 21 23:28:08 compute-0 ceph-mon[74318]: osdmap e101: 3 total, 3 up, 3 in
Jan 21 23:28:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:28:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:08.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:28:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:28:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:28:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:28:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:28:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:28:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:28:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 21 23:28:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 21 23:28:09 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 21 23:28:09 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 102 pg[9.10( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=6 ec=54/42 lis/c=100/54 les/c/f=101/55/0 sis=102) [1] r=0 lpr=102 pi=[54,102)/1 luod=0'0 crt=49'1136 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:09 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 102 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=102) [1]/[0] r=-1 lpr=102 pi=[54,102)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:09 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 102 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=102) [1]/[0] r=-1 lpr=102 pi=[54,102)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 23:28:09 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 102 pg[9.10( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=6 ec=54/42 lis/c=100/54 les/c/f=101/55/0 sis=102) [1] r=0 lpr=102 pi=[54,102)/1 crt=49'1136 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
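
Each start_peering_interval line above records a PG's up/acting set changing as the CRUSH mapping shifts under the growing pgp_num_actual; pool 9's PGs can be seen bouncing between OSDs 0, 1 and 2 and re-peering every epoch or two. A sketch to extract just the membership transitions (field layout inferred from these lines; "peering.log" is a hypothetical file holding a capture like this one):

    import re

    # Pull "up [a] -> [b], acting [c] -> [d]" out of start_peering_interval lines.
    PEER_RE = re.compile(
        r"pg\[(?P<pg>\S+)\(.*start_peering_interval "
        r"up \[(?P<up_old>[\d,]*)\] -> \[(?P<up_new>[\d,]*)\], "
        r"acting \[(?P<act_old>[\d,]*)\] -> \[(?P<act_new>[\d,]*)\]"
    )

    with open("peering.log") as fh:          # hypothetical capture file
        for raw in fh:
            m = PEER_RE.search(raw)
            if m:
                print(f"pg {m['pg']}: acting [{m['act_old']}] -> [{m['act_new']}]")
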
Jan 21 23:28:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:09 compute-0 ceph-mon[74318]: 9.2 scrub starts
Jan 21 23:28:09 compute-0 ceph-mon[74318]: 9.2 scrub ok
Jan 21 23:28:09 compute-0 ceph-mon[74318]: 4.1c scrub starts
Jan 21 23:28:09 compute-0 ceph-mon[74318]: 4.1c scrub ok
Jan 21 23:28:09 compute-0 ceph-mon[74318]: osdmap e102: 3 total, 3 up, 3 in
Jan 21 23:28:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:28:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:09.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:28:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 21 23:28:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 21 23:28:10 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 21 23:28:10 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 103 pg[9.10( v 49'1136 (0'0,49'1136] local-lis/les=102/103 n=6 ec=54/42 lis/c=100/54 les/c/f=101/55/0 sis=102) [1] r=0 lpr=102 pi=[54,102)/1 crt=49'1136 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:28:10 compute-0 ceph-mon[74318]: 9.4 scrub starts
Jan 21 23:28:10 compute-0 ceph-mon[74318]: 9.4 scrub ok
Jan 21 23:28:10 compute-0 ceph-mon[74318]: pgmap v230: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:10 compute-0 ceph-mon[74318]: osdmap e103: 3 total, 3 up, 3 in
Jan 21 23:28:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:10.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 21 23:28:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 21 23:28:11 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 21 23:28:11 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 104 pg[9.11( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=6 ec=54/42 lis/c=102/54 les/c/f=103/55/0 sis=104) [1] r=0 lpr=104 pi=[54,104)/1 luod=0'0 crt=49'1136 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:11 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 104 pg[9.11( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=6 ec=54/42 lis/c=102/54 les/c/f=103/55/0 sis=104) [1] r=0 lpr=104 pi=[54,104)/1 crt=49'1136 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:28:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:11 compute-0 ceph-mon[74318]: 4.19 scrub starts
Jan 21 23:28:11 compute-0 ceph-mon[74318]: 4.19 scrub ok
Jan 21 23:28:11 compute-0 ceph-mon[74318]: osdmap e104: 3 total, 3 up, 3 in
Jan 21 23:28:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:28:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:11.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:28:12 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 21 23:28:12 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 21 23:28:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:28:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 21 23:28:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 21 23:28:12 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 21 23:28:12 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 105 pg[9.11( v 49'1136 (0'0,49'1136] local-lis/les=104/105 n=6 ec=54/42 lis/c=102/54 les/c/f=103/55/0 sis=104) [1] r=0 lpr=104 pi=[54,104)/1 crt=49'1136 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:28:12 compute-0 ceph-mon[74318]: pgmap v233: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:12 compute-0 ceph-mon[74318]: osdmap e105: 3 total, 3 up, 3 in
Jan 21 23:28:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:12.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:13 compute-0 ceph-mon[74318]: 10.a scrub starts
Jan 21 23:28:13 compute-0 ceph-mon[74318]: 10.a scrub ok
Jan 21 23:28:13 compute-0 sudo[98920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:28:13 compute-0 sudo[98920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:13 compute-0 sudo[98920]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:13 compute-0 sudo[98945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:28:13 compute-0 sudo[98945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:13 compute-0 sudo[98945]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:28:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:13.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:28:14 compute-0 ceph-mon[74318]: pgmap v235: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:28:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:14.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:28:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Jan 21 23:28:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Jan 21 23:28:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 21 23:28:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 21 23:28:15 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 21 23:28:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 21 23:28:15 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 21 23:28:15 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 106 pg[9.12( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=106) [1] r=0 lpr=106 pi=[54,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:28:15 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 21 23:28:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:15.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 21 23:28:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 21 23:28:16 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 21 23:28:16 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 107 pg[9.12( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=107) [1]/[0] r=-1 lpr=107 pi=[54,107)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:16 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 107 pg[9.12( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=107) [1]/[0] r=-1 lpr=107 pi=[54,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 23:28:16 compute-0 ceph-mon[74318]: pgmap v236: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Jan 21 23:28:16 compute-0 ceph-mon[74318]: 5.4 scrub starts
Jan 21 23:28:16 compute-0 ceph-mon[74318]: 5.4 scrub ok
Jan 21 23:28:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 21 23:28:16 compute-0 ceph-mon[74318]: osdmap e106: 3 total, 3 up, 3 in
Jan 21 23:28:16 compute-0 ceph-mon[74318]: osdmap e107: 3 total, 3 up, 3 in
Jan 21 23:28:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:16.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:17 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 21 23:28:17 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 21 23:28:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:28:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 170 B/s wr, 14 op/s; 18 B/s, 1 objects/s recovering
Jan 21 23:28:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Jan 21 23:28:17 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 21 23:28:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 21 23:28:17 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 21 23:28:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 21 23:28:17 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 21 23:28:17 compute-0 ceph-mon[74318]: 5.8 deep-scrub starts
Jan 21 23:28:17 compute-0 ceph-mon[74318]: 5.8 deep-scrub ok
Jan 21 23:28:17 compute-0 ceph-mon[74318]: 10.b scrub starts
Jan 21 23:28:17 compute-0 ceph-mon[74318]: 10.b scrub ok
Jan 21 23:28:17 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 21 23:28:17 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 21 23:28:17 compute-0 ceph-mon[74318]: osdmap e108: 3 total, 3 up, 3 in
Jan 21 23:28:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:17.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 21 23:28:18 compute-0 ceph-mon[74318]: pgmap v239: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 170 B/s wr, 14 op/s; 18 B/s, 1 objects/s recovering
Jan 21 23:28:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:18.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 21 23:28:18 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 21 23:28:18 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 109 pg[9.12( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=5 ec=54/42 lis/c=107/54 les/c/f=108/55/0 sis=109) [1] r=0 lpr=109 pi=[54,109)/1 luod=0'0 crt=49'1136 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:18 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 109 pg[9.12( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=5 ec=54/42 lis/c=107/54 les/c/f=108/55/0 sis=109) [1] r=0 lpr=109 pi=[54,109)/1 crt=49'1136 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:28:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 0 objects/s recovering
Jan 21 23:28:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 21 23:28:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 21 23:28:19 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 21 23:28:19 compute-0 ceph-mon[74318]: 5.b scrub starts
Jan 21 23:28:19 compute-0 ceph-mon[74318]: 5.b scrub ok
Jan 21 23:28:19 compute-0 ceph-mon[74318]: osdmap e109: 3 total, 3 up, 3 in
Jan 21 23:28:19 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 110 pg[9.12( v 49'1136 (0'0,49'1136] local-lis/les=109/110 n=5 ec=54/42 lis/c=107/54 les/c/f=108/55/0 sis=109) [1] r=0 lpr=109 pi=[54,109)/1 crt=49'1136 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:28:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:19.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:20 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 21 23:28:20 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 21 23:28:20 compute-0 sshd-session[98973]: Accepted publickey for zuul from 192.168.122.30 port 52826 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:28:20 compute-0 systemd-logind[786]: New session 35 of user zuul.
Jan 21 23:28:20 compute-0 systemd[1]: Started Session 35 of User zuul.
Jan 21 23:28:20 compute-0 sshd-session[98973]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:28:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:20.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:20 compute-0 ceph-mon[74318]: pgmap v242: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 0 objects/s recovering
Jan 21 23:28:20 compute-0 ceph-mon[74318]: 5.d scrub starts
Jan 21 23:28:20 compute-0 ceph-mon[74318]: 5.d scrub ok
Jan 21 23:28:20 compute-0 ceph-mon[74318]: osdmap e110: 3 total, 3 up, 3 in
Jan 21 23:28:20 compute-0 ceph-mon[74318]: 10.c scrub starts
Jan 21 23:28:20 compute-0 ceph-mon[74318]: 10.c scrub ok
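
The scrub lines come in pairs per PG, first from the OSD's cluster log channel and then echoed by the mon, so a capture like this one can be checked for scrubs that started but never reported ok. A minimal sketch (open_scrubs is an illustrative helper; deep-scrub pairs are tracked the same way):

    import re

    # "<pool>.<pg> scrub starts" / "... scrub ok" pairing from this capture.
    SCRUB_RE = re.compile(
        r" (?P<pg>\d+\.[0-9a-f]+) (?P<kind>deep-scrub|scrub) (?P<event>starts|ok)$"
    )

    def open_scrubs(lines):
        """Return the set of (pg, kind) whose scrub never logged ok."""
        started = set()
        for line in lines:
            m = SCRUB_RE.search(line)
            if not m:
                continue
            key = (m['pg'], m['kind'])
            if m['event'] == 'starts':
                started.add(key)
            else:
                started.discard(key)
        return started
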
Jan 21 23:28:21 compute-0 python3.9[99126]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 21 23:28:21 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 21 23:28:21 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 21 23:28:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 23 B/s, 0 objects/s recovering
Jan 21 23:28:21 compute-0 ceph-mon[74318]: 10.d scrub starts
Jan 21 23:28:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:21.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:28:22 compute-0 python3.9[99301]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:28:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:22.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:22 compute-0 ceph-mon[74318]: 10.d scrub ok
Jan 21 23:28:22 compute-0 ceph-mon[74318]: pgmap v244: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 23 B/s, 0 objects/s recovering
Jan 21 23:28:23 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 21 23:28:23 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 21 23:28:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Jan 21 23:28:23 compute-0 sudo[99456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayhgqcnthqizxldwtxpiihvmuzhuqlic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038103.0960474-93-178004541928907/AnsiballZ_command.py'
Jan 21 23:28:23 compute-0 sudo[99456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:28:23 compute-0 python3.9[99458]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:28:23 compute-0 sudo[99456]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:23 compute-0 ceph-mon[74318]: 9.c scrub starts
Jan 21 23:28:23 compute-0 ceph-mon[74318]: 9.c scrub ok
Jan 21 23:28:23 compute-0 ceph-mon[74318]: 10.e scrub starts
Jan 21 23:28:23 compute-0 ceph-mon[74318]: 10.e scrub ok
Jan 21 23:28:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:23.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:24 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 21 23:28:24 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 21 23:28:24 compute-0 sudo[99609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtbhruvnyiafzblabupxzbqhsqjegaiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038104.203576-129-222238734029497/AnsiballZ_stat.py'
Jan 21 23:28:24 compute-0 sudo[99609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:28:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:24.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:24 compute-0 ceph-mon[74318]: pgmap v245: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Jan 21 23:28:24 compute-0 ceph-mon[74318]: 10.16 scrub starts
Jan 21 23:28:24 compute-0 ceph-mon[74318]: 10.16 scrub ok
Jan 21 23:28:24 compute-0 python3.9[99611]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:28:24 compute-0 sudo[99609]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 14 B/s, 0 objects/s recovering
Jan 21 23:28:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Jan 21 23:28:25 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 21 23:28:25 compute-0 sudo[99764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xplctmdmkwhppkvyegcifzkclcimszzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038105.2894063-162-155436873050756/AnsiballZ_file.py'
Jan 21 23:28:25 compute-0 sudo[99764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:28:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 21 23:28:25 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 21 23:28:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 21 23:28:25 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 21 23:28:25 compute-0 ceph-mon[74318]: 5.e scrub starts
Jan 21 23:28:25 compute-0 ceph-mon[74318]: 5.e scrub ok
Jan 21 23:28:25 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 21 23:28:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:28:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:25.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:28:25 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 21 23:28:26 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 21 23:28:26 compute-0 python3.9[99766]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:28:26 compute-0 sudo[99764]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:26 compute-0 sudo[99916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzorlegyzboelgzgyqjospqoiixjftfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038106.3332431-189-26553661715963/AnsiballZ_file.py'
Jan 21 23:28:26 compute-0 sudo[99916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:28:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:28:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:26.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:28:26 compute-0 python3.9[99918]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:28:26 compute-0 sudo[99916]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:26 compute-0 ceph-mon[74318]: 9.14 scrub starts
Jan 21 23:28:26 compute-0 ceph-mon[74318]: 9.14 scrub ok
Jan 21 23:28:26 compute-0 ceph-mon[74318]: pgmap v246: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 14 B/s, 0 objects/s recovering
Jan 21 23:28:26 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 21 23:28:26 compute-0 ceph-mon[74318]: osdmap e111: 3 total, 3 up, 3 in
Jan 21 23:28:26 compute-0 ceph-mon[74318]: 10.17 scrub starts
Jan 21 23:28:26 compute-0 ceph-mon[74318]: 10.17 scrub ok
Jan 21 23:28:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:28:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Jan 21 23:28:27 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 21 23:28:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 21 23:28:27 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 21 23:28:27 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 21 23:28:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 21 23:28:27 compute-0 python3.9[100069]: ansible-ansible.builtin.service_facts Invoked
Jan 21 23:28:27 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 21 23:28:27 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 112 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=68/68 les/c/f=69/69/0 sis=112) [1] r=0 lpr=112 pi=[68,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:28:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:27.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:28 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 21 23:28:28 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 21 23:28:28 compute-0 network[100086]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 23:28:28 compute-0 network[100087]: 'network-scripts' will be removed from distribution in near future.
Jan 21 23:28:28 compute-0 network[100088]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 23:28:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:28.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 21 23:28:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 21 23:28:28 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 21 23:28:28 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 113 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=68/68 les/c/f=69/69/0 sis=113) [1]/[2] r=-1 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:28 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 113 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=68/68 les/c/f=69/69/0 sis=113) [1]/[2] r=-1 lpr=113 pi=[68,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 23:28:28 compute-0 ceph-mon[74318]: 9.1c deep-scrub starts
Jan 21 23:28:28 compute-0 ceph-mon[74318]: 9.1c deep-scrub ok
Jan 21 23:28:28 compute-0 ceph-mon[74318]: pgmap v248: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:28 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 21 23:28:28 compute-0 ceph-mon[74318]: osdmap e112: 3 total, 3 up, 3 in
Jan 21 23:28:28 compute-0 ceph-mon[74318]: 10.1a scrub starts
Jan 21 23:28:28 compute-0 ceph-mon[74318]: 10.1a scrub ok
Jan 21 23:28:29 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 21 23:28:29 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 21 23:28:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 21 23:28:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:28:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:29.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:28:29 compute-0 ceph-mon[74318]: osdmap e113: 3 total, 3 up, 3 in
Jan 21 23:28:29 compute-0 ceph-mon[74318]: 10.1c scrub starts
Jan 21 23:28:29 compute-0 ceph-mon[74318]: 10.1c scrub ok
Jan 21 23:28:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 21 23:28:30 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 21 23:28:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:30.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 21 23:28:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 21 23:28:31 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 21 23:28:31 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 115 pg[9.15( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=5 ec=54/42 lis/c=113/68 les/c/f=114/69/0 sis=115) [1] r=0 lpr=115 pi=[68,115)/1 luod=0'0 crt=49'1136 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:31 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 115 pg[9.15( v 49'1136 (0'0,49'1136] local-lis/les=0/0 n=5 ec=54/42 lis/c=113/68 les/c/f=114/69/0 sis=115) [1] r=0 lpr=115 pi=[68,115)/1 crt=49'1136 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 23:28:31 compute-0 ceph-mon[74318]: pgmap v251: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:31 compute-0 ceph-mon[74318]: osdmap e114: 3 total, 3 up, 3 in
Jan 21 23:28:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:28:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:31.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:28:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 21 23:28:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 21 23:28:32 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 21 23:28:32 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 116 pg[9.15( v 49'1136 (0'0,49'1136] local-lis/les=115/116 n=5 ec=54/42 lis/c=113/68 les/c/f=114/69/0 sis=115) [1] r=0 lpr=115 pi=[68,115)/1 crt=49'1136 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:28:32 compute-0 ceph-mon[74318]: osdmap e115: 3 total, 3 up, 3 in
Jan 21 23:28:32 compute-0 ceph-mon[74318]: osdmap e116: 3 total, 3 up, 3 in
Jan 21 23:28:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:28:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:28:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:32.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:28:33 compute-0 ceph-mon[74318]: pgmap v254: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:28:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:33.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:28:34 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 21 23:28:34 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 21 23:28:34 compute-0 sudo[100226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:28:34 compute-0 sudo[100226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:34 compute-0 sudo[100226]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:34 compute-0 sudo[100251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:28:34 compute-0 sudo[100251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:34 compute-0 sudo[100251]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:34 compute-0 ceph-mon[74318]: 5.12 scrub starts
Jan 21 23:28:34 compute-0 ceph-mon[74318]: 5.12 scrub ok
Jan 21 23:28:34 compute-0 ceph-mon[74318]: 10.1d scrub starts
Jan 21 23:28:34 compute-0 ceph-mon[74318]: 10.1d scrub ok
Jan 21 23:28:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:28:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:34.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:28:34 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 21 23:28:34 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 21 23:28:35 compute-0 python3.9[100401]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:28:35 compute-0 ceph-mon[74318]: pgmap v256: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:35 compute-0 ceph-mon[74318]: 10.1f scrub starts
Jan 21 23:28:35 compute-0 ceph-mon[74318]: 10.1f scrub ok
Jan 21 23:28:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Jan 21 23:28:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Jan 21 23:28:35 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 21 23:28:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:28:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:35.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:28:36 compute-0 python3.9[100552]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:28:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 21 23:28:36 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 21 23:28:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 21 23:28:36 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 21 23:28:36 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 21 23:28:36 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 117 pg[9.16( v 49'1136 (0'0,49'1136] local-lis/les=73/74 n=5 ec=54/42 lis/c=73/73 les/c/f=74/74/0 sis=117 pruub=10.883838654s) [2] r=-1 lpr=117 pi=[73,117)/1 crt=49'1136 mlcod 0'0 active pruub 218.580612183s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:36 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 117 pg[9.16( v 49'1136 (0'0,49'1136] local-lis/les=73/74 n=5 ec=54/42 lis/c=73/73 les/c/f=74/74/0 sis=117 pruub=10.883004189s) [2] r=-1 lpr=117 pi=[73,117)/1 crt=49'1136 mlcod 0'0 unknown NOTIFY pruub 218.580612183s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:28:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:36.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 21 23:28:37 compute-0 ceph-mon[74318]: pgmap v257: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Jan 21 23:28:37 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 21 23:28:37 compute-0 ceph-mon[74318]: osdmap e117: 3 total, 3 up, 3 in
Jan 21 23:28:37 compute-0 ceph-mon[74318]: 5.13 scrub starts
Jan 21 23:28:37 compute-0 ceph-mon[74318]: 5.13 scrub ok
Jan 21 23:28:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 21 23:28:37 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 21 23:28:37 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 118 pg[9.16( v 49'1136 (0'0,49'1136] local-lis/les=73/74 n=5 ec=54/42 lis/c=73/73 les/c/f=74/74/0 sis=118) [2]/[1] r=0 lpr=118 pi=[73,118)/1 crt=49'1136 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:37 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 118 pg[9.16( v 49'1136 (0'0,49'1136] local-lis/les=73/74 n=5 ec=54/42 lis/c=73/73 les/c/f=74/74/0 sis=118) [2]/[1] r=0 lpr=118 pi=[73,118)/1 crt=49'1136 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 23:28:37 compute-0 python3.9[100707]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:28:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 170 B/s wr, 14 op/s; 36 B/s, 1 objects/s recovering
Jan 21 23:28:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Jan 21 23:28:37 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 21 23:28:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:38.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 21 23:28:38 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 21 23:28:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 21 23:28:38 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 21 23:28:38 compute-0 ceph-mon[74318]: osdmap e118: 3 total, 3 up, 3 in
Jan 21 23:28:38 compute-0 ceph-mon[74318]: pgmap v260: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 170 B/s wr, 14 op/s; 36 B/s, 1 objects/s recovering
Jan 21 23:28:38 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 21 23:28:38 compute-0 ceph-mon[74318]: 5.1a scrub starts
Jan 21 23:28:38 compute-0 ceph-mon[74318]: 5.1a scrub ok
Jan 21 23:28:38 compute-0 ceph-mon[74318]: 11.2 scrub starts
Jan 21 23:28:38 compute-0 ceph-mon[74318]: 11.2 scrub ok
Jan 21 23:28:38 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 119 pg[9.16( v 49'1136 (0'0,49'1136] local-lis/les=118/119 n=5 ec=54/42 lis/c=73/73 les/c/f=74/74/0 sis=118) [2]/[1] async=[2] r=0 lpr=118 pi=[73,118)/1 crt=49'1136 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:28:38 compute-0 sudo[100863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffohxfdodfiupekrmvfiuntbdvevpogo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038118.1210687-333-103800555517693/AnsiballZ_setup.py'
Jan 21 23:28:38 compute-0 sudo[100863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:28:38 compute-0 python3.9[100865]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:28:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:28:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:38.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:28:38 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 21 23:28:38 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 21 23:28:39 compute-0 sudo[100863]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:28:39
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'default.rgw.control', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'vms']
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:28:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 21 23:28:39 compute-0 sudo[100948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bapijqokoacjrfoiwodfrppslddxqlwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038118.1210687-333-103800555517693/AnsiballZ_dnf.py'
Jan 21 23:28:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 21 23:28:39 compute-0 ceph-mon[74318]: osdmap e119: 3 total, 3 up, 3 in
Jan 21 23:28:39 compute-0 ceph-mon[74318]: 8.14 scrub starts
Jan 21 23:28:39 compute-0 ceph-mon[74318]: 8.14 scrub ok
Jan 21 23:28:39 compute-0 sudo[100948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:28:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 21 23:28:39 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 21 23:28:39 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 120 pg[9.16( v 49'1136 (0'0,49'1136] local-lis/les=118/119 n=5 ec=54/42 lis/c=118/73 les/c/f=119/74/0 sis=120 pruub=14.959695816s) [2] async=[2] r=-1 lpr=120 pi=[73,120)/1 crt=49'1136 mlcod 49'1136 active pruub 225.406555176s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:39 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 120 pg[9.16( v 49'1136 (0'0,49'1136] local-lis/les=118/119 n=5 ec=54/42 lis/c=118/73 les/c/f=119/74/0 sis=120 pruub=14.959373474s) [2] r=-1 lpr=120 pi=[73,120)/1 crt=49'1136 mlcod 0'0 unknown NOTIFY pruub 225.406555176s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:28:39 compute-0 python3.9[100950]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:28:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 1 active+remapped, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 0 objects/s recovering
Jan 21 23:28:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Jan 21 23:28:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 21 23:28:39 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 21 23:28:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:28:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:40.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:28:40 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 21 23:28:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 21 23:28:40 compute-0 ceph-mon[74318]: osdmap e120: 3 total, 3 up, 3 in
Jan 21 23:28:40 compute-0 ceph-mon[74318]: pgmap v263: 305 pgs: 1 active+remapped, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 0 objects/s recovering
Jan 21 23:28:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 21 23:28:40 compute-0 ceph-mon[74318]: 11.14 scrub starts
Jan 21 23:28:40 compute-0 ceph-mon[74318]: 11.14 scrub ok
Jan 21 23:28:40 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 21 23:28:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 21 23:28:40 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 21 23:28:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:40.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:40 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 21 23:28:40 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 21 23:28:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 21 23:28:41 compute-0 ceph-mon[74318]: osdmap e121: 3 total, 3 up, 3 in
Jan 21 23:28:41 compute-0 ceph-mon[74318]: 8.9 scrub starts
Jan 21 23:28:41 compute-0 ceph-mon[74318]: 8.9 scrub ok
Jan 21 23:28:41 compute-0 ceph-mon[74318]: 8.17 scrub starts
Jan 21 23:28:41 compute-0 ceph-mon[74318]: 8.17 scrub ok
Jan 21 23:28:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 25 B/s, 0 objects/s recovering
Jan 21 23:28:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Jan 21 23:28:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 21 23:28:42 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 21 23:28:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:42.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:42 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 21 23:28:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:28:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 21 23:28:42 compute-0 ceph-mon[74318]: pgmap v265: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 25 B/s, 0 objects/s recovering
Jan 21 23:28:42 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 21 23:28:42 compute-0 ceph-mon[74318]: 11.1 scrub starts
Jan 21 23:28:42 compute-0 ceph-mon[74318]: 11.1 scrub ok
Jan 21 23:28:42 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 21 23:28:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 21 23:28:42 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 21 23:28:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:28:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:42.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:28:42 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 21 23:28:42 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 21 23:28:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 21 23:28:43 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 21 23:28:43 compute-0 ceph-mon[74318]: osdmap e122: 3 total, 3 up, 3 in
Jan 21 23:28:43 compute-0 ceph-mon[74318]: 11.5 scrub starts
Jan 21 23:28:43 compute-0 ceph-mon[74318]: 11.5 scrub ok
Jan 21 23:28:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 21 23:28:43 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 21 23:28:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Jan 21 23:28:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 21 23:28:43 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 21 23:28:43 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 21 23:28:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:44.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 21 23:28:44 compute-0 ceph-mon[74318]: osdmap e123: 3 total, 3 up, 3 in
Jan 21 23:28:44 compute-0 ceph-mon[74318]: 8.1c deep-scrub starts
Jan 21 23:28:44 compute-0 ceph-mon[74318]: 8.1c deep-scrub ok
Jan 21 23:28:44 compute-0 ceph-mon[74318]: pgmap v268: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 21 23:28:44 compute-0 ceph-mon[74318]: 11.4 scrub starts
Jan 21 23:28:44 compute-0 ceph-mon[74318]: 11.4 scrub ok
Jan 21 23:28:44 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 21 23:28:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 21 23:28:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:44.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:44 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 21 23:28:44 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 21 23:28:45 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 21 23:28:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Jan 21 23:28:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 21 23:28:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 21 23:28:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 21 23:28:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 21 23:28:45 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 21 23:28:45 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 124 pg[9.1a( v 49'1136 (0'0,49'1136] local-lis/les=85/86 n=5 ec=54/42 lis/c=85/85 les/c/f=86/86/0 sis=124 pruub=14.051030159s) [0] r=-1 lpr=124 pi=[85,124)/1 crt=49'1136 mlcod 0'0 active pruub 230.828186035s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:45 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 125 pg[9.1a( v 49'1136 (0'0,49'1136] local-lis/les=85/86 n=5 ec=54/42 lis/c=85/85 les/c/f=86/86/0 sis=124 pruub=14.050911903s) [0] r=-1 lpr=124 pi=[85,124)/1 crt=49'1136 mlcod 0'0 unknown NOTIFY pruub 230.828186035s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:28:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 21 23:28:45 compute-0 ceph-mon[74318]: osdmap e124: 3 total, 3 up, 3 in
Jan 21 23:28:45 compute-0 ceph-mon[74318]: 11.7 scrub starts
Jan 21 23:28:45 compute-0 ceph-mon[74318]: 11.7 scrub ok
Jan 21 23:28:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 21 23:28:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:28:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:46.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:28:46 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 21 23:28:46 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 21 23:28:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:46.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 21 23:28:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 21 23:28:46 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 21 23:28:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 126 pg[9.1a( v 49'1136 (0'0,49'1136] local-lis/les=85/86 n=5 ec=54/42 lis/c=85/85 les/c/f=86/86/0 sis=126) [0]/[1] r=0 lpr=126 pi=[85,126)/1 crt=49'1136 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:46 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 126 pg[9.1a( v 49'1136 (0'0,49'1136] local-lis/les=85/86 n=5 ec=54/42 lis/c=85/85 les/c/f=86/86/0 sis=126) [0]/[1] r=0 lpr=126 pi=[85,126)/1 crt=49'1136 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 23:28:46 compute-0 ceph-mon[74318]: 11.6 scrub starts
Jan 21 23:28:46 compute-0 ceph-mon[74318]: 11.6 scrub ok
Jan 21 23:28:46 compute-0 ceph-mon[74318]: 11.a deep-scrub starts
Jan 21 23:28:46 compute-0 ceph-mon[74318]: 11.a deep-scrub ok
Jan 21 23:28:46 compute-0 ceph-mon[74318]: pgmap v270: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 21 23:28:46 compute-0 ceph-mon[74318]: osdmap e125: 3 total, 3 up, 3 in
Jan 21 23:28:46 compute-0 ceph-mon[74318]: 8.4 scrub starts
Jan 21 23:28:46 compute-0 ceph-mon[74318]: 8.4 scrub ok
Jan 21 23:28:46 compute-0 ceph-mon[74318]: osdmap e126: 3 total, 3 up, 3 in
Jan 21 23:28:46 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 21 23:28:46 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:28:46.861143) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 23:28:46 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 21 23:28:46 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038126861337, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7484, "num_deletes": 251, "total_data_size": 9459126, "memory_usage": 9625936, "flush_reason": "Manual Compaction"}
Jan 21 23:28:46 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 21 23:28:46 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038126964071, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7774400, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 136, "largest_seqno": 7611, "table_properties": {"data_size": 7746516, "index_size": 18361, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 78602, "raw_average_key_size": 23, "raw_value_size": 7681188, "raw_average_value_size": 2284, "num_data_blocks": 809, "num_entries": 3363, "num_filter_entries": 3363, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037772, "oldest_key_time": 1769037772, "file_creation_time": 1769038126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:28:46 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 102979 microseconds, and 22713 cpu microseconds.
Jan 21 23:28:46 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:28:46.964146) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7774400 bytes OK
Jan 21 23:28:46 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:28:46.964167) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 21 23:28:46 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:28:46.965946) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 21 23:28:46 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:28:46.965965) EVENT_LOG_v1 {"time_micros": 1769038126965959, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 21 23:28:46 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:28:46.965984) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 21 23:28:46 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9426455, prev total WAL file size 9426455, number of live WAL files 2.
Jan 21 23:28:46 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:28:46 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:28:46.968147) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 21 23:28:46 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 21 23:28:46 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7592KB) 13(50KB) 8(1944B)]
Jan 21 23:28:46 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038126968273, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7827722, "oldest_snapshot_seqno": -1}
Jan 21 23:28:47 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3171 keys, 7784971 bytes, temperature: kUnknown
Jan 21 23:28:47 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038127068690, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7784971, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7757568, "index_size": 18397, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7941, "raw_key_size": 76412, "raw_average_key_size": 24, "raw_value_size": 7694042, "raw_average_value_size": 2426, "num_data_blocks": 813, "num_entries": 3171, "num_filter_entries": 3171, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769038126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:28:47 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:28:47 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:28:47.068957) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7784971 bytes
Jan 21 23:28:47 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:28:47.071469) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 77.9 rd, 77.5 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.5, 0.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3462, records dropped: 291 output_compression: NoCompression
Jan 21 23:28:47 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:28:47.071499) EVENT_LOG_v1 {"time_micros": 1769038127071484, "job": 4, "event": "compaction_finished", "compaction_time_micros": 100505, "compaction_time_cpu_micros": 20685, "output_level": 6, "num_output_files": 1, "total_output_size": 7784971, "num_input_records": 3462, "num_output_records": 3171, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 23:28:47 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:28:47 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038127073157, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 21 23:28:47 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:28:47 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038127073259, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 21 23:28:47 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:28:47 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038127073300, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 21 23:28:47 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:28:46.968013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:28:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:28:47 compute-0 sudo[101024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:28:47 compute-0 sudo[101024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:47 compute-0 sudo[101024]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:47 compute-0 sudo[101049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:28:47 compute-0 sudo[101049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:47 compute-0 sudo[101049]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:47 compute-0 sudo[101074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:28:47 compute-0 sudo[101074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:47 compute-0 sudo[101074]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:47 compute-0 sudo[101099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 21 23:28:47 compute-0 sudo[101099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Jan 21 23:28:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 21 23:28:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 21 23:28:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 21 23:28:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 21 23:28:47 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 21 23:28:47 compute-0 ceph-mon[74318]: 11.9 scrub starts
Jan 21 23:28:47 compute-0 ceph-mon[74318]: 11.9 scrub ok
Jan 21 23:28:47 compute-0 ceph-mon[74318]: 8.c deep-scrub starts
Jan 21 23:28:47 compute-0 ceph-mon[74318]: 8.c deep-scrub ok
Jan 21 23:28:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 21 23:28:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 21 23:28:47 compute-0 ceph-mon[74318]: osdmap e127: 3 total, 3 up, 3 in
Jan 21 23:28:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:48.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:48 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 127 pg[9.1a( v 49'1136 (0'0,49'1136] local-lis/les=126/127 n=5 ec=54/42 lis/c=85/85 les/c/f=86/86/0 sis=126) [0]/[1] async=[0] r=0 lpr=126 pi=[85,126)/1 crt=49'1136 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:28:48 compute-0 podman[101196]: 2026-01-21 23:28:48.439667741 +0000 UTC m=+0.172843457 container exec 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:28:48 compute-0 podman[101196]: 2026-01-21 23:28:48.605100161 +0000 UTC m=+0.338275877 container exec_died 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:28:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:28:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:48.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:28:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 21 23:28:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 21 23:28:48 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 21 23:28:48 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 128 pg[9.1a( v 49'1136 (0'0,49'1136] local-lis/les=126/127 n=5 ec=54/42 lis/c=126/85 les/c/f=127/86/0 sis=128 pruub=15.217485428s) [0] async=[0] r=-1 lpr=128 pi=[85,128)/1 crt=49'1136 mlcod 49'1136 active pruub 235.074645996s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:48 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 128 pg[9.1a( v 49'1136 (0'0,49'1136] local-lis/les=126/127 n=5 ec=54/42 lis/c=126/85 les/c/f=127/86/0 sis=128 pruub=15.217374802s) [0] r=-1 lpr=128 pi=[85,128)/1 crt=49'1136 mlcod 0'0 unknown NOTIFY pruub 235.074645996s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:28:48 compute-0 ceph-mon[74318]: pgmap v273: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:28:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:28:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:28:49 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 21 23:28:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:28:49 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 21 23:28:49 compute-0 podman[101346]: 2026-01-21 23:28:49.385786753 +0000 UTC m=+0.059453742 container exec fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 21 23:28:49 compute-0 podman[101346]: 2026-01-21 23:28:49.40790079 +0000 UTC m=+0.081567749 container exec_died fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 21 23:28:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:28:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:28:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:28:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:28:49 compute-0 podman[101411]: 2026-01-21 23:28:49.696181498 +0000 UTC m=+0.084931107 container exec 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, description=keepalived for Ceph, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., version=2.2.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, build-date=2023-02-22T09:23:20, vcs-type=git, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, io.openshift.expose-services=)
Jan 21 23:28:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 1 active+remapped, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 0 objects/s recovering
Jan 21 23:28:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Jan 21 23:28:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 21 23:28:49 compute-0 podman[101411]: 2026-01-21 23:28:49.743888773 +0000 UTC m=+0.132638382 container exec_died 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, distribution-scope=public, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, version=2.2.4, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1793)
Jan 21 23:28:49 compute-0 sudo[101099]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:28:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:28:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:28:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:28:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 21 23:28:49 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 21 23:28:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 21 23:28:49 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 21 23:28:49 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 129 pg[9.1d( v 49'1136 (0'0,49'1136] local-lis/les=93/94 n=5 ec=54/42 lis/c=93/93 les/c/f=94/94/0 sis=129 pruub=12.249215126s) [2] r=-1 lpr=129 pi=[93,129)/1 crt=49'1136 mlcod 0'0 active pruub 233.115432739s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:49 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 129 pg[9.1d( v 49'1136 (0'0,49'1136] local-lis/les=93/94 n=5 ec=54/42 lis/c=93/93 les/c/f=94/94/0 sis=129 pruub=12.248760223s) [2] r=-1 lpr=129 pi=[93,129)/1 crt=49'1136 mlcod 0'0 unknown NOTIFY pruub 233.115432739s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:28:49 compute-0 ceph-mon[74318]: osdmap e128: 3 total, 3 up, 3 in
Jan 21 23:28:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:28:49 compute-0 ceph-mon[74318]: 8.1b scrub starts
Jan 21 23:28:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:28:49 compute-0 ceph-mon[74318]: 8.1b scrub ok
Jan 21 23:28:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:28:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:28:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 21 23:28:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:28:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:28:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 21 23:28:49 compute-0 ceph-mon[74318]: osdmap e129: 3 total, 3 up, 3 in
Jan 21 23:28:49 compute-0 sudo[101446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:28:49 compute-0 sudo[101446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:49 compute-0 sudo[101446]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:50 compute-0 sudo[101471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:28:50 compute-0 sudo[101471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:50 compute-0 sudo[101471]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:28:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:50.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:28:50 compute-0 sudo[101496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:28:50 compute-0 sudo[101496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:50 compute-0 sudo[101496]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:50 compute-0 sudo[101521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:28:50 compute-0 sudo[101521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:50 compute-0 sudo[101521]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:28:50 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:28:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:28:50 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:28:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:28:50 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:28:50 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 2c179c45-a66d-41b5-b77c-fe641e48722b does not exist
Jan 21 23:28:50 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev e67ca474-01ae-4876-a847-996b6086fab3 does not exist
Jan 21 23:28:50 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 621db5e9-f93a-4e09-a3bb-730a5062f76b does not exist
Jan 21 23:28:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:28:50 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:28:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:28:50 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:28:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:28:50 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:28:50 compute-0 sudo[101578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:28:50 compute-0 sudo[101578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:50 compute-0 sudo[101578]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:28:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:50.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:28:50 compute-0 sudo[101604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:28:50 compute-0 sudo[101604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:50 compute-0 sudo[101604]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 21 23:28:50 compute-0 ceph-mon[74318]: pgmap v276: 305 pgs: 1 active+remapped, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 0 objects/s recovering
Jan 21 23:28:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:28:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:28:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:28:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:28:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:28:50 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:28:50 compute-0 sudo[101630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:28:50 compute-0 sudo[101630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:50 compute-0 sudo[101630]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 21 23:28:50 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 21 23:28:50 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 130 pg[9.1d( v 49'1136 (0'0,49'1136] local-lis/les=93/94 n=5 ec=54/42 lis/c=93/93 les/c/f=94/94/0 sis=130) [2]/[1] r=0 lpr=130 pi=[93,130)/1 crt=49'1136 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:50 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 130 pg[9.1d( v 49'1136 (0'0,49'1136] local-lis/les=93/94 n=5 ec=54/42 lis/c=93/93 les/c/f=94/94/0 sis=130) [2]/[1] r=0 lpr=130 pi=[93,130)/1 crt=49'1136 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 23:28:51 compute-0 sudo[101655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:28:51 compute-0 sudo[101655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:51 compute-0 podman[101724]: 2026-01-21 23:28:51.460644506 +0000 UTC m=+0.057530831 container create 9ed53bd5f225c108ed86b2909654ce5855aded26ec30eefc134ec71100891e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_blackwell, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 23:28:51 compute-0 systemd[75939]: Created slice User Background Tasks Slice.
Jan 21 23:28:51 compute-0 systemd[75939]: Starting Cleanup of User's Temporary Files and Directories...
Jan 21 23:28:51 compute-0 systemd[1]: Started libpod-conmon-9ed53bd5f225c108ed86b2909654ce5855aded26ec30eefc134ec71100891e4a.scope.
Jan 21 23:28:51 compute-0 systemd[75939]: Finished Cleanup of User's Temporary Files and Directories.
Jan 21 23:28:51 compute-0 podman[101724]: 2026-01-21 23:28:51.430525423 +0000 UTC m=+0.027411808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:28:51 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:28:51 compute-0 podman[101724]: 2026-01-21 23:28:51.560521459 +0000 UTC m=+0.157407784 container init 9ed53bd5f225c108ed86b2909654ce5855aded26ec30eefc134ec71100891e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:28:51 compute-0 podman[101724]: 2026-01-21 23:28:51.568636009 +0000 UTC m=+0.165522314 container start 9ed53bd5f225c108ed86b2909654ce5855aded26ec30eefc134ec71100891e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_blackwell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:28:51 compute-0 podman[101724]: 2026-01-21 23:28:51.572164192 +0000 UTC m=+0.169050517 container attach 9ed53bd5f225c108ed86b2909654ce5855aded26ec30eefc134ec71100891e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_blackwell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 21 23:28:51 compute-0 inspiring_blackwell[101742]: 167 167
Jan 21 23:28:51 compute-0 systemd[1]: libpod-9ed53bd5f225c108ed86b2909654ce5855aded26ec30eefc134ec71100891e4a.scope: Deactivated successfully.
Jan 21 23:28:51 compute-0 podman[101724]: 2026-01-21 23:28:51.575687845 +0000 UTC m=+0.172574190 container died 9ed53bd5f225c108ed86b2909654ce5855aded26ec30eefc134ec71100891e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_blackwell, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 23:28:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-76492ca7847cf9891e9f2854db3e2e89567c18212f82a137ccbcf6d98f725654-merged.mount: Deactivated successfully.
Jan 21 23:28:51 compute-0 podman[101724]: 2026-01-21 23:28:51.638626057 +0000 UTC m=+0.235512382 container remove 9ed53bd5f225c108ed86b2909654ce5855aded26ec30eefc134ec71100891e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_blackwell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:28:51 compute-0 systemd[1]: libpod-conmon-9ed53bd5f225c108ed86b2909654ce5855aded26ec30eefc134ec71100891e4a.scope: Deactivated successfully.
Jan 21 23:28:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 21 23:28:51 compute-0 podman[101768]: 2026-01-21 23:28:51.865985397 +0000 UTC m=+0.058399239 container create 9bc84179518f58d2edc7cd43e8632e61e52f9af6ad40b614cb689966eaacd488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:28:51 compute-0 systemd[1]: Started libpod-conmon-9bc84179518f58d2edc7cd43e8632e61e52f9af6ad40b614cb689966eaacd488.scope.
Jan 21 23:28:51 compute-0 podman[101768]: 2026-01-21 23:28:51.83637107 +0000 UTC m=+0.028784902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:28:51 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:28:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 21 23:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/694f7f071f006d7e9e8314d90ca4b84b5446b2f4a60ccc6cdb37d56b19eefa95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/694f7f071f006d7e9e8314d90ca4b84b5446b2f4a60ccc6cdb37d56b19eefa95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/694f7f071f006d7e9e8314d90ca4b84b5446b2f4a60ccc6cdb37d56b19eefa95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/694f7f071f006d7e9e8314d90ca4b84b5446b2f4a60ccc6cdb37d56b19eefa95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/694f7f071f006d7e9e8314d90ca4b84b5446b2f4a60ccc6cdb37d56b19eefa95/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:28:51 compute-0 ceph-mon[74318]: 11.b scrub starts
Jan 21 23:28:51 compute-0 ceph-mon[74318]: 11.b scrub ok
Jan 21 23:28:51 compute-0 ceph-mon[74318]: 11.13 scrub starts
Jan 21 23:28:51 compute-0 ceph-mon[74318]: 11.13 scrub ok
Jan 21 23:28:51 compute-0 ceph-mon[74318]: osdmap e130: 3 total, 3 up, 3 in
Jan 21 23:28:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 21 23:28:51 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 21 23:28:51 compute-0 podman[101768]: 2026-01-21 23:28:51.997221843 +0000 UTC m=+0.189635675 container init 9bc84179518f58d2edc7cd43e8632e61e52f9af6ad40b614cb689966eaacd488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cray, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:28:52 compute-0 podman[101768]: 2026-01-21 23:28:52.006156119 +0000 UTC m=+0.198569941 container start 9bc84179518f58d2edc7cd43e8632e61e52f9af6ad40b614cb689966eaacd488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:28:52 compute-0 podman[101768]: 2026-01-21 23:28:52.00995552 +0000 UTC m=+0.202369352 container attach 9bc84179518f58d2edc7cd43e8632e61e52f9af6ad40b614cb689966eaacd488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cray, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 21 23:28:52 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 131 pg[9.1d( v 49'1136 (0'0,49'1136] local-lis/les=130/131 n=5 ec=54/42 lis/c=93/93 les/c/f=94/94/0 sis=130) [2]/[1] async=[2] r=0 lpr=130 pi=[93,130)/1 crt=49'1136 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:28:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:52.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 23:28:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 21 23:28:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 21 23:28:52 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 21 23:28:52 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 132 pg[9.1d( v 49'1136 (0'0,49'1136] local-lis/les=130/131 n=5 ec=54/42 lis/c=130/93 les/c/f=131/94/0 sis=132 pruub=15.626190186s) [2] async=[2] r=-1 lpr=132 pi=[93,132)/1 crt=49'1136 mlcod 49'1136 active pruub 238.963699341s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:52 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 132 pg[9.1d( v 49'1136 (0'0,49'1136] local-lis/les=130/131 n=5 ec=54/42 lis/c=130/93 les/c/f=131/94/0 sis=132 pruub=15.625821114s) [2] r=-1 lpr=132 pi=[93,132)/1 crt=49'1136 mlcod 0'0 unknown NOTIFY pruub 238.963699341s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:28:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:52.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:52 compute-0 angry_cray[101784]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:28:52 compute-0 angry_cray[101784]: --> relative data size: 1.0
Jan 21 23:28:52 compute-0 angry_cray[101784]: --> All data devices are unavailable
Jan 21 23:28:52 compute-0 systemd[1]: libpod-9bc84179518f58d2edc7cd43e8632e61e52f9af6ad40b614cb689966eaacd488.scope: Deactivated successfully.
Jan 21 23:28:52 compute-0 podman[101768]: 2026-01-21 23:28:52.881049713 +0000 UTC m=+1.073463525 container died 9bc84179518f58d2edc7cd43e8632e61e52f9af6ad40b614cb689966eaacd488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cray, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:28:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-694f7f071f006d7e9e8314d90ca4b84b5446b2f4a60ccc6cdb37d56b19eefa95-merged.mount: Deactivated successfully.
Jan 21 23:28:52 compute-0 podman[101768]: 2026-01-21 23:28:52.939066208 +0000 UTC m=+1.131480010 container remove 9bc84179518f58d2edc7cd43e8632e61e52f9af6ad40b614cb689966eaacd488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cray, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 21 23:28:52 compute-0 systemd[1]: libpod-conmon-9bc84179518f58d2edc7cd43e8632e61e52f9af6ad40b614cb689966eaacd488.scope: Deactivated successfully.
Jan 21 23:28:52 compute-0 sudo[101655]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:53 compute-0 ceph-mon[74318]: 11.c scrub starts
Jan 21 23:28:53 compute-0 ceph-mon[74318]: 11.c scrub ok
Jan 21 23:28:53 compute-0 ceph-mon[74318]: pgmap v279: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 21 23:28:53 compute-0 ceph-mon[74318]: osdmap e131: 3 total, 3 up, 3 in
Jan 21 23:28:53 compute-0 ceph-mon[74318]: osdmap e132: 3 total, 3 up, 3 in
Jan 21 23:28:53 compute-0 sudo[101810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:28:53 compute-0 sudo[101810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:53 compute-0 sudo[101810]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:53 compute-0 sudo[101835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:28:53 compute-0 sudo[101835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:53 compute-0 sudo[101835]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:53 compute-0 sudo[101861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:28:53 compute-0 sudo[101861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:53 compute-0 sudo[101861]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:53 compute-0 sudo[101887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:28:53 compute-0 sudo[101887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 21 23:28:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 21 23:28:53 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 21 23:28:53 compute-0 podman[101952]: 2026-01-21 23:28:53.588869976 +0000 UTC m=+0.053759050 container create 96cd3b8a375869929b88dacaa5e7f421e3decdda39e3bf28dd5b4f1a4a9f3694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 21 23:28:53 compute-0 systemd[1]: Started libpod-conmon-96cd3b8a375869929b88dacaa5e7f421e3decdda39e3bf28dd5b4f1a4a9f3694.scope.
Jan 21 23:28:53 compute-0 podman[101952]: 2026-01-21 23:28:53.563154283 +0000 UTC m=+0.028043437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:28:53 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:28:53 compute-0 podman[101952]: 2026-01-21 23:28:53.695701071 +0000 UTC m=+0.160590285 container init 96cd3b8a375869929b88dacaa5e7f421e3decdda39e3bf28dd5b4f1a4a9f3694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dewdney, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:28:53 compute-0 podman[101952]: 2026-01-21 23:28:53.706530438 +0000 UTC m=+0.171419532 container start 96cd3b8a375869929b88dacaa5e7f421e3decdda39e3bf28dd5b4f1a4a9f3694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:28:53 compute-0 podman[101952]: 2026-01-21 23:28:53.710387251 +0000 UTC m=+0.175276345 container attach 96cd3b8a375869929b88dacaa5e7f421e3decdda39e3bf28dd5b4f1a4a9f3694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dewdney, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:28:53 compute-0 magical_dewdney[101968]: 167 167
Jan 21 23:28:53 compute-0 systemd[1]: libpod-96cd3b8a375869929b88dacaa5e7f421e3decdda39e3bf28dd5b4f1a4a9f3694.scope: Deactivated successfully.
Jan 21 23:28:53 compute-0 podman[101952]: 2026-01-21 23:28:53.713694177 +0000 UTC m=+0.178583321 container died 96cd3b8a375869929b88dacaa5e7f421e3decdda39e3bf28dd5b4f1a4a9f3694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dewdney, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d193559e0ae16482eced6676e1c1968749a72090b2be38752ae230d25ea724cc-merged.mount: Deactivated successfully.
Jan 21 23:28:53 compute-0 podman[101952]: 2026-01-21 23:28:53.773280582 +0000 UTC m=+0.238169686 container remove 96cd3b8a375869929b88dacaa5e7f421e3decdda39e3bf28dd5b4f1a4a9f3694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:28:53 compute-0 systemd[1]: libpod-conmon-96cd3b8a375869929b88dacaa5e7f421e3decdda39e3bf28dd5b4f1a4a9f3694.scope: Deactivated successfully.
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.361378652521869e-06 of space, bias 1.0, pg target 0.0019084135957565607 quantized to 32 (current 32)
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:28:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:28:53 compute-0 podman[101994]: 2026-01-21 23:28:53.964322121 +0000 UTC m=+0.056476057 container create b6c4d8fd5a86eebc69616a1ed7c22cf3e1935e6bbe6094ffe8e5221c0be1e7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 21 23:28:54 compute-0 systemd[1]: Started libpod-conmon-b6c4d8fd5a86eebc69616a1ed7c22cf3e1935e6bbe6094ffe8e5221c0be1e7a9.scope.
Jan 21 23:28:54 compute-0 ceph-mon[74318]: 11.d scrub starts
Jan 21 23:28:54 compute-0 ceph-mon[74318]: 11.d scrub ok
Jan 21 23:28:54 compute-0 ceph-mon[74318]: osdmap e133: 3 total, 3 up, 3 in
Jan 21 23:28:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:28:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:54.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:28:54 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:28:54 compute-0 podman[101994]: 2026-01-21 23:28:53.947206234 +0000 UTC m=+0.039360200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:28:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac4d107f4a1b238c648f716d38ed9a6dcd09749c74ef181b1288d7c51028f80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:28:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac4d107f4a1b238c648f716d38ed9a6dcd09749c74ef181b1288d7c51028f80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:28:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac4d107f4a1b238c648f716d38ed9a6dcd09749c74ef181b1288d7c51028f80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:28:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac4d107f4a1b238c648f716d38ed9a6dcd09749c74ef181b1288d7c51028f80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:28:54 compute-0 podman[101994]: 2026-01-21 23:28:54.063706148 +0000 UTC m=+0.155860124 container init b6c4d8fd5a86eebc69616a1ed7c22cf3e1935e6bbe6094ffe8e5221c0be1e7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dubinsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 21 23:28:54 compute-0 podman[101994]: 2026-01-21 23:28:54.078787051 +0000 UTC m=+0.170941037 container start b6c4d8fd5a86eebc69616a1ed7c22cf3e1935e6bbe6094ffe8e5221c0be1e7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dubinsky, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 21 23:28:54 compute-0 podman[101994]: 2026-01-21 23:28:54.083525812 +0000 UTC m=+0.175679798 container attach b6c4d8fd5a86eebc69616a1ed7c22cf3e1935e6bbe6094ffe8e5221c0be1e7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:28:54 compute-0 sudo[102018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:28:54 compute-0 sudo[102018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:54 compute-0 sudo[102018]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:54 compute-0 sudo[102043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:28:54 compute-0 sudo[102043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:54 compute-0 sudo[102043]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:54.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]: {
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:     "1": [
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:         {
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:             "devices": [
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:                 "/dev/loop3"
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:             ],
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:             "lv_name": "ceph_lv0",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:             "lv_size": "7511998464",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:             "name": "ceph_lv0",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:             "tags": {
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:                 "ceph.cluster_name": "ceph",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:                 "ceph.crush_device_class": "",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:                 "ceph.encrypted": "0",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:                 "ceph.osd_id": "1",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:                 "ceph.type": "block",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:                 "ceph.vdo": "0"
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:             },
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:             "type": "block",
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:             "vg_name": "ceph_vg0"
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:         }
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]:     ]
Jan 21 23:28:54 compute-0 pedantic_dubinsky[102010]: }
Jan 21 23:28:54 compute-0 systemd[1]: libpod-b6c4d8fd5a86eebc69616a1ed7c22cf3e1935e6bbe6094ffe8e5221c0be1e7a9.scope: Deactivated successfully.
Jan 21 23:28:54 compute-0 podman[101994]: 2026-01-21 23:28:54.904295156 +0000 UTC m=+0.996449112 container died b6c4d8fd5a86eebc69616a1ed7c22cf3e1935e6bbe6094ffe8e5221c0be1e7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 21 23:28:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-bac4d107f4a1b238c648f716d38ed9a6dcd09749c74ef181b1288d7c51028f80-merged.mount: Deactivated successfully.
Jan 21 23:28:54 compute-0 podman[101994]: 2026-01-21 23:28:54.97542214 +0000 UTC m=+1.067576096 container remove b6c4d8fd5a86eebc69616a1ed7c22cf3e1935e6bbe6094ffe8e5221c0be1e7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 21 23:28:54 compute-0 systemd[1]: libpod-conmon-b6c4d8fd5a86eebc69616a1ed7c22cf3e1935e6bbe6094ffe8e5221c0be1e7a9.scope: Deactivated successfully.
Jan 21 23:28:55 compute-0 ceph-mon[74318]: pgmap v283: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:55 compute-0 sudo[101887]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:55 compute-0 sudo[102098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:28:55 compute-0 sudo[102098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:55 compute-0 sudo[102098]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:55 compute-0 sudo[102125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:28:55 compute-0 sudo[102125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:55 compute-0 sudo[102125]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:55 compute-0 sudo[102155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:28:55 compute-0 sudo[102155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:55 compute-0 sudo[102155]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:55 compute-0 sudo[102180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:28:55 compute-0 sudo[102180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:55 compute-0 podman[102251]: 2026-01-21 23:28:55.656006391 +0000 UTC m=+0.051494897 container create 6197f3d109923f8091f6937b3f7e59a86e8e51b5fe7a89e7ab69b29aef1f5407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_taussig, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:28:55 compute-0 systemd[1]: Started libpod-conmon-6197f3d109923f8091f6937b3f7e59a86e8e51b5fe7a89e7ab69b29aef1f5407.scope.
Jan 21 23:28:55 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:28:55 compute-0 podman[102251]: 2026-01-21 23:28:55.732203628 +0000 UTC m=+0.127692144 container init 6197f3d109923f8091f6937b3f7e59a86e8e51b5fe7a89e7ab69b29aef1f5407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 23:28:55 compute-0 podman[102251]: 2026-01-21 23:28:55.638981957 +0000 UTC m=+0.034470473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:28:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Jan 21 23:28:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 21 23:28:55 compute-0 podman[102251]: 2026-01-21 23:28:55.744216712 +0000 UTC m=+0.139705218 container start 6197f3d109923f8091f6937b3f7e59a86e8e51b5fe7a89e7ab69b29aef1f5407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_taussig, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 23:28:55 compute-0 eloquent_taussig[102268]: 167 167
Jan 21 23:28:55 compute-0 podman[102251]: 2026-01-21 23:28:55.74792267 +0000 UTC m=+0.143411176 container attach 6197f3d109923f8091f6937b3f7e59a86e8e51b5fe7a89e7ab69b29aef1f5407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_taussig, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 21 23:28:55 compute-0 systemd[1]: libpod-6197f3d109923f8091f6937b3f7e59a86e8e51b5fe7a89e7ab69b29aef1f5407.scope: Deactivated successfully.
Jan 21 23:28:55 compute-0 podman[102251]: 2026-01-21 23:28:55.748783918 +0000 UTC m=+0.144272464 container died 6197f3d109923f8091f6937b3f7e59a86e8e51b5fe7a89e7ab69b29aef1f5407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:28:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c5d7443b5a4f9434f39f44b4e6303cd29f230155ae86a67cb9c56846e6ca11f-merged.mount: Deactivated successfully.
Jan 21 23:28:55 compute-0 podman[102251]: 2026-01-21 23:28:55.797799196 +0000 UTC m=+0.193287712 container remove 6197f3d109923f8091f6937b3f7e59a86e8e51b5fe7a89e7ab69b29aef1f5407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_taussig, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:28:55 compute-0 systemd[1]: libpod-conmon-6197f3d109923f8091f6937b3f7e59a86e8e51b5fe7a89e7ab69b29aef1f5407.scope: Deactivated successfully.
Jan 21 23:28:55 compute-0 podman[102300]: 2026-01-21 23:28:55.960220849 +0000 UTC m=+0.043366298 container create 08ce23fe7ec30d756d5b1346823bf857e52d463c3521f6ea232379d5f9341b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 21 23:28:56 compute-0 systemd[1]: Started libpod-conmon-08ce23fe7ec30d756d5b1346823bf857e52d463c3521f6ea232379d5f9341b21.scope.
Jan 21 23:28:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 21 23:28:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:28:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:56.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:28:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 21 23:28:56 compute-0 podman[102300]: 2026-01-21 23:28:55.943878426 +0000 UTC m=+0.027023915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:28:56 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:28:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8e5ab73800afe99563f4d4f9d6ce535e4ec8b0d546b8ad2664e6ffde174bda8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:28:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8e5ab73800afe99563f4d4f9d6ce535e4ec8b0d546b8ad2664e6ffde174bda8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:28:56 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 21 23:28:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8e5ab73800afe99563f4d4f9d6ce535e4ec8b0d546b8ad2664e6ffde174bda8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:28:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 21 23:28:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8e5ab73800afe99563f4d4f9d6ce535e4ec8b0d546b8ad2664e6ffde174bda8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:28:56 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 21 23:28:56 compute-0 podman[102300]: 2026-01-21 23:28:56.066826148 +0000 UTC m=+0.149971617 container init 08ce23fe7ec30d756d5b1346823bf857e52d463c3521f6ea232379d5f9341b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 21 23:28:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 134 pg[9.1e( v 49'1136 (0'0,49'1136] local-lis/les=73/74 n=5 ec=54/42 lis/c=73/73 les/c/f=74/74/0 sis=134 pruub=15.556155205s) [0] r=-1 lpr=134 pi=[73,134)/1 crt=49'1136 mlcod 0'0 active pruub 242.581054688s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:56 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 134 pg[9.1e( v 49'1136 (0'0,49'1136] local-lis/les=73/74 n=5 ec=54/42 lis/c=73/73 les/c/f=74/74/0 sis=134 pruub=15.556083679s) [0] r=-1 lpr=134 pi=[73,134)/1 crt=49'1136 mlcod 0'0 unknown NOTIFY pruub 242.581054688s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:28:56 compute-0 podman[102300]: 2026-01-21 23:28:56.081831227 +0000 UTC m=+0.164976676 container start 08ce23fe7ec30d756d5b1346823bf857e52d463c3521f6ea232379d5f9341b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 23:28:56 compute-0 podman[102300]: 2026-01-21 23:28:56.086967861 +0000 UTC m=+0.170113330 container attach 08ce23fe7ec30d756d5b1346823bf857e52d463c3521f6ea232379d5f9341b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 21 23:28:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:56.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:56 compute-0 funny_mahavira[102317]: {
Jan 21 23:28:56 compute-0 funny_mahavira[102317]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:28:56 compute-0 funny_mahavira[102317]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:28:56 compute-0 funny_mahavira[102317]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:28:56 compute-0 funny_mahavira[102317]:         "osd_id": 1,
Jan 21 23:28:56 compute-0 funny_mahavira[102317]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:28:56 compute-0 funny_mahavira[102317]:         "type": "bluestore"
Jan 21 23:28:56 compute-0 funny_mahavira[102317]:     }
Jan 21 23:28:56 compute-0 funny_mahavira[102317]: }
Jan 21 23:28:56 compute-0 systemd[1]: libpod-08ce23fe7ec30d756d5b1346823bf857e52d463c3521f6ea232379d5f9341b21.scope: Deactivated successfully.
Jan 21 23:28:56 compute-0 podman[102300]: 2026-01-21 23:28:56.943400455 +0000 UTC m=+1.026545904 container died 08ce23fe7ec30d756d5b1346823bf857e52d463c3521f6ea232379d5f9341b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 23:28:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8e5ab73800afe99563f4d4f9d6ce535e4ec8b0d546b8ad2664e6ffde174bda8-merged.mount: Deactivated successfully.
Jan 21 23:28:57 compute-0 podman[102300]: 2026-01-21 23:28:57.019170668 +0000 UTC m=+1.102316117 container remove 08ce23fe7ec30d756d5b1346823bf857e52d463c3521f6ea232379d5f9341b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:28:57 compute-0 systemd[1]: libpod-conmon-08ce23fe7ec30d756d5b1346823bf857e52d463c3521f6ea232379d5f9341b21.scope: Deactivated successfully.
Jan 21 23:28:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 21 23:28:57 compute-0 sudo[102180]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:57 compute-0 ceph-mon[74318]: 11.10 deep-scrub starts
Jan 21 23:28:57 compute-0 ceph-mon[74318]: 11.10 deep-scrub ok
Jan 21 23:28:57 compute-0 ceph-mon[74318]: pgmap v284: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:28:57 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 21 23:28:57 compute-0 ceph-mon[74318]: osdmap e134: 3 total, 3 up, 3 in
Jan 21 23:28:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:28:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 21 23:28:57 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 21 23:28:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 135 pg[9.1e( v 49'1136 (0'0,49'1136] local-lis/les=73/74 n=5 ec=54/42 lis/c=73/73 les/c/f=74/74/0 sis=135) [0]/[1] r=0 lpr=135 pi=[73,135)/1 crt=49'1136 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:57 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 135 pg[9.1e( v 49'1136 (0'0,49'1136] local-lis/les=73/74 n=5 ec=54/42 lis/c=73/73 les/c/f=74/74/0 sis=135) [0]/[1] r=0 lpr=135 pi=[73,135)/1 crt=49'1136 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 23:28:57 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:28:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:28:57 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:28:57 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev d0d1d810-27f2-43ed-aa18-b7b871cf1123 does not exist
Jan 21 23:28:57 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 23a5df64-0b59-40f3-8ed2-f55f09124a3e does not exist
Jan 21 23:28:57 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev c25790a1-2149-42b1-b618-79e86c7911a3 does not exist
Jan 21 23:28:57 compute-0 sudo[102351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:28:57 compute-0 sudo[102351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:57 compute-0 sudo[102351]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:57 compute-0 sudo[102377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:28:57 compute-0 sudo[102377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:28:57 compute-0 sudo[102377]: pam_unix(sudo:session): session closed for user root
Jan 21 23:28:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:28:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 40 B/s, 0 objects/s recovering
Jan 21 23:28:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 21 23:28:57 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:28:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:28:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:28:58.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:28:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 21 23:28:58 compute-0 ceph-mon[74318]: 11.11 scrub starts
Jan 21 23:28:58 compute-0 ceph-mon[74318]: 11.11 scrub ok
Jan 21 23:28:58 compute-0 ceph-mon[74318]: 8.15 deep-scrub starts
Jan 21 23:28:58 compute-0 ceph-mon[74318]: 8.15 deep-scrub ok
Jan 21 23:28:58 compute-0 ceph-mon[74318]: osdmap e135: 3 total, 3 up, 3 in
Jan 21 23:28:58 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:28:58 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:28:58 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 21 23:28:58 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:28:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 21 23:28:58 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 21 23:28:58 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 136 pg[9.1f( v 49'1136 (0'0,49'1136] local-lis/les=97/98 n=5 ec=54/42 lis/c=97/97 les/c/f=98/98/0 sis=136 pruub=8.190554619s) [0] r=-1 lpr=136 pi=[97,136)/1 crt=49'1136 mlcod 0'0 active pruub 237.257339478s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:58 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 136 pg[9.1f( v 49'1136 (0'0,49'1136] local-lis/les=97/98 n=5 ec=54/42 lis/c=97/97 les/c/f=98/98/0 sis=136 pruub=8.190503120s) [0] r=-1 lpr=136 pi=[97,136)/1 crt=49'1136 mlcod 0'0 unknown NOTIFY pruub 237.257339478s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:28:58 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 136 pg[9.1e( v 49'1136 (0'0,49'1136] local-lis/les=135/136 n=5 ec=54/42 lis/c=73/73 les/c/f=74/74/0 sis=135) [0]/[1] async=[0] r=0 lpr=135 pi=[73,135)/1 crt=49'1136 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:28:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:28:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:28:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:28:58.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:28:59 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 21 23:28:59 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 21 23:28:59 compute-0 ceph-mon[74318]: pgmap v287: 305 pgs: 305 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 40 B/s, 0 objects/s recovering
Jan 21 23:28:59 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 23:28:59 compute-0 ceph-mon[74318]: osdmap e136: 3 total, 3 up, 3 in
Jan 21 23:28:59 compute-0 ceph-mon[74318]: 11.1b scrub starts
Jan 21 23:28:59 compute-0 ceph-mon[74318]: 11.1b scrub ok
Jan 21 23:28:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 21 23:28:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 21 23:28:59 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 21 23:28:59 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 137 pg[9.1f( v 49'1136 (0'0,49'1136] local-lis/les=97/98 n=5 ec=54/42 lis/c=97/97 les/c/f=98/98/0 sis=137) [0]/[1] r=0 lpr=137 pi=[97,137)/1 crt=49'1136 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:59 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 137 pg[9.1f( v 49'1136 (0'0,49'1136] local-lis/les=97/98 n=5 ec=54/42 lis/c=97/97 les/c/f=98/98/0 sis=137) [0]/[1] r=0 lpr=137 pi=[97,137)/1 crt=49'1136 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 23:28:59 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 137 pg[9.1e( v 49'1136 (0'0,49'1136] local-lis/les=135/136 n=5 ec=54/42 lis/c=135/73 les/c/f=136/74/0 sis=137 pruub=14.986126900s) [0] async=[0] r=-1 lpr=137 pi=[73,137)/1 crt=49'1136 mlcod 49'1136 active pruub 245.078872681s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:28:59 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 137 pg[9.1e( v 49'1136 (0'0,49'1136] local-lis/les=135/136 n=5 ec=54/42 lis/c=135/73 les/c/f=136/74/0 sis=137 pruub=14.986031532s) [0] r=-1 lpr=137 pi=[73,137)/1 crt=49'1136 mlcod 0'0 unknown NOTIFY pruub 245.078872681s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:28:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 1 active+remapped, 304 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 21 23:29:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:00.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 21 23:29:00 compute-0 ceph-mon[74318]: osdmap e137: 3 total, 3 up, 3 in
Jan 21 23:29:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 21 23:29:00 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 21 23:29:00 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 138 pg[9.1f( v 49'1136 (0'0,49'1136] local-lis/les=137/138 n=5 ec=54/42 lis/c=97/97 les/c/f=98/98/0 sis=137) [0]/[1] async=[0] r=0 lpr=137 pi=[97,137)/1 crt=49'1136 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 23:29:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:00.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:01 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 21 23:29:01 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 21 23:29:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 21 23:29:01 compute-0 ceph-mon[74318]: pgmap v290: 305 pgs: 1 active+remapped, 304 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 21 23:29:01 compute-0 ceph-mon[74318]: osdmap e138: 3 total, 3 up, 3 in
Jan 21 23:29:01 compute-0 ceph-mon[74318]: 8.8 scrub starts
Jan 21 23:29:01 compute-0 ceph-mon[74318]: 8.8 scrub ok
Jan 21 23:29:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 21 23:29:01 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 21 23:29:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 139 pg[9.1f( v 49'1136 (0'0,49'1136] local-lis/les=137/138 n=5 ec=54/42 lis/c=137/97 les/c/f=138/98/0 sis=139 pruub=14.989238739s) [0] async=[0] r=-1 lpr=139 pi=[97,139)/1 crt=49'1136 mlcod 49'1136 active pruub 247.127166748s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 21 23:29:01 compute-0 ceph-osd[84656]: osd.1 pg_epoch: 139 pg[9.1f( v 49'1136 (0'0,49'1136] local-lis/les=137/138 n=5 ec=54/42 lis/c=137/97 les/c/f=138/98/0 sis=139 pruub=14.989165306s) [0] r=-1 lpr=139 pi=[97,139)/1 crt=49'1136 mlcod 0'0 unknown NOTIFY pruub 247.127166748s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 23:29:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 23:29:02 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Jan 21 23:29:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:29:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:02.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:29:02 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Jan 21 23:29:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 21 23:29:02 compute-0 ceph-mon[74318]: osdmap e139: 3 total, 3 up, 3 in
Jan 21 23:29:02 compute-0 ceph-mon[74318]: 11.15 deep-scrub starts
Jan 21 23:29:02 compute-0 ceph-mon[74318]: 11.15 deep-scrub ok
Jan 21 23:29:02 compute-0 ceph-mon[74318]: 8.3 deep-scrub starts
Jan 21 23:29:02 compute-0 ceph-mon[74318]: 8.3 deep-scrub ok
Jan 21 23:29:02 compute-0 ceph-mon[74318]: 8.18 scrub starts
Jan 21 23:29:02 compute-0 ceph-mon[74318]: 8.18 scrub ok
Jan 21 23:29:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 21 23:29:02 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 21 23:29:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:29:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:02.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:03 compute-0 ceph-mon[74318]: pgmap v293: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 23:29:03 compute-0 ceph-mon[74318]: osdmap e140: 3 total, 3 up, 3 in
Jan 21 23:29:03 compute-0 ceph-mon[74318]: 11.16 deep-scrub starts
Jan 21 23:29:03 compute-0 ceph-mon[74318]: 11.16 deep-scrub ok
Jan 21 23:29:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:04.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:04 compute-0 ceph-mon[74318]: 11.18 deep-scrub starts
Jan 21 23:29:04 compute-0 ceph-mon[74318]: 11.18 deep-scrub ok
Jan 21 23:29:04 compute-0 ceph-mon[74318]: 8.a deep-scrub starts
Jan 21 23:29:04 compute-0 ceph-mon[74318]: 8.a deep-scrub ok
Jan 21 23:29:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:04.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:05 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 21 23:29:05 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 21 23:29:05 compute-0 ceph-mon[74318]: pgmap v295: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:05 compute-0 ceph-mon[74318]: 11.1e scrub starts
Jan 21 23:29:05 compute-0 ceph-mon[74318]: 11.1e scrub ok
Jan 21 23:29:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 458 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 23:29:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:29:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:06.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:29:06 compute-0 ceph-mon[74318]: pgmap v296: 305 pgs: 305 active+clean; 458 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 23:29:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:06.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:07 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 21 23:29:07 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 21 23:29:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:29:07 compute-0 ceph-mon[74318]: 11.1f scrub starts
Jan 21 23:29:07 compute-0 ceph-mon[74318]: 11.1f scrub ok
Jan 21 23:29:07 compute-0 ceph-mon[74318]: 8.10 scrub starts
Jan 21 23:29:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 458 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 23:29:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:08.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:08 compute-0 ceph-mon[74318]: 8.10 scrub ok
Jan 21 23:29:08 compute-0 ceph-mon[74318]: pgmap v297: 305 pgs: 305 active+clean; 458 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 23:29:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:08.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:29:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:29:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:29:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:29:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:29:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:29:09 compute-0 ceph-mon[74318]: 10.13 scrub starts
Jan 21 23:29:09 compute-0 ceph-mon[74318]: 10.13 scrub ok
Jan 21 23:29:09 compute-0 ceph-mon[74318]: 7.1d scrub starts
Jan 21 23:29:09 compute-0 ceph-mon[74318]: 7.1d scrub ok
Jan 21 23:29:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 458 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 23:29:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:29:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:10.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:29:10 compute-0 ceph-mon[74318]: 10.11 scrub starts
Jan 21 23:29:10 compute-0 ceph-mon[74318]: 10.11 scrub ok
Jan 21 23:29:10 compute-0 ceph-mon[74318]: pgmap v298: 305 pgs: 305 active+clean; 458 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 23:29:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:10.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 458 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 23:29:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:12.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:12 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 21 23:29:12 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 21 23:29:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:29:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:12.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:12 compute-0 ceph-mon[74318]: pgmap v299: 305 pgs: 305 active+clean; 458 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 23:29:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 458 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 23:29:13 compute-0 ceph-mon[74318]: 11.1d scrub starts
Jan 21 23:29:13 compute-0 ceph-mon[74318]: 11.1d scrub ok
Jan 21 23:29:13 compute-0 ceph-mon[74318]: 7.16 scrub starts
Jan 21 23:29:13 compute-0 ceph-mon[74318]: 7.16 scrub ok
Jan 21 23:29:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:14.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:14 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 21 23:29:14 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 21 23:29:14 compute-0 sudo[102438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:29:14 compute-0 sudo[102438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:29:14 compute-0 sudo[102438]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:14 compute-0 sudo[102463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:29:14 compute-0 sudo[102463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:29:14 compute-0 sudo[102463]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:14.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:14 compute-0 ceph-mon[74318]: 7.1e scrub starts
Jan 21 23:29:14 compute-0 ceph-mon[74318]: 7.1e scrub ok
Jan 21 23:29:14 compute-0 ceph-mon[74318]: pgmap v300: 305 pgs: 305 active+clean; 458 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 23:29:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 458 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 23:29:16 compute-0 ceph-mon[74318]: 8.19 scrub starts
Jan 21 23:29:16 compute-0 ceph-mon[74318]: 8.19 scrub ok
Jan 21 23:29:16 compute-0 ceph-mon[74318]: 10.18 scrub starts
Jan 21 23:29:16 compute-0 ceph-mon[74318]: 10.18 scrub ok
Jan 21 23:29:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:29:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:16.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:29:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:16.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:17 compute-0 ceph-mon[74318]: pgmap v301: 305 pgs: 305 active+clean; 458 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 23:29:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:29:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 458 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:18.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:18 compute-0 ceph-mon[74318]: 10.1 scrub starts
Jan 21 23:29:18 compute-0 ceph-mon[74318]: 10.1 scrub ok
Jan 21 23:29:18 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.1c deep-scrub starts
Jan 21 23:29:18 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.1c deep-scrub ok
Jan 21 23:29:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:18.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:19 compute-0 ceph-mon[74318]: pgmap v302: 305 pgs: 305 active+clean; 458 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 458 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:20.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:20 compute-0 ceph-mon[74318]: 11.1c deep-scrub starts
Jan 21 23:29:20 compute-0 ceph-mon[74318]: 11.1c deep-scrub ok
Jan 21 23:29:20 compute-0 ceph-mon[74318]: 10.12 scrub starts
Jan 21 23:29:20 compute-0 ceph-mon[74318]: 10.12 scrub ok
Jan 21 23:29:20 compute-0 ceph-mon[74318]: 10.1b scrub starts
Jan 21 23:29:20 compute-0 ceph-mon[74318]: 10.1b scrub ok
Jan 21 23:29:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:20.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:21 compute-0 ceph-mon[74318]: pgmap v303: 305 pgs: 305 active+clean; 458 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:21 compute-0 ceph-mon[74318]: 7.4 scrub starts
Jan 21 23:29:21 compute-0 ceph-mon[74318]: 7.4 scrub ok
Jan 21 23:29:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:22.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:29:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:22.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:23 compute-0 ceph-mon[74318]: pgmap v304: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:24.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:24 compute-0 ceph-mon[74318]: 7.13 deep-scrub starts
Jan 21 23:29:24 compute-0 ceph-mon[74318]: 7.13 deep-scrub ok
Jan 21 23:29:24 compute-0 ceph-mon[74318]: pgmap v305: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:24.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:25 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 21 23:29:25 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 21 23:29:25 compute-0 ceph-mon[74318]: 7.10 scrub starts
Jan 21 23:29:25 compute-0 ceph-mon[74318]: 7.10 scrub ok
Jan 21 23:29:25 compute-0 ceph-mon[74318]: 10.10 scrub starts
Jan 21 23:29:25 compute-0 ceph-mon[74318]: 10.10 scrub ok
Jan 21 23:29:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:26.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:26 compute-0 sudo[100948]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:26 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 21 23:29:26 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 21 23:29:26 compute-0 ceph-mon[74318]: 8.12 scrub starts
Jan 21 23:29:26 compute-0 ceph-mon[74318]: 8.12 scrub ok
Jan 21 23:29:26 compute-0 ceph-mon[74318]: pgmap v306: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:26.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:27 compute-0 sudo[102643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmyqwtqkqjojhhxgyrrdtebtwxslxdim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038166.8014247-369-44017833793950/AnsiballZ_command.py'
Jan 21 23:29:27 compute-0 sudo[102643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:27 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 21 23:29:27 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 21 23:29:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:29:27 compute-0 python3.9[102645]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:29:27 compute-0 ceph-mon[74318]: 11.12 scrub starts
Jan 21 23:29:27 compute-0 ceph-mon[74318]: 11.12 scrub ok
Jan 21 23:29:27 compute-0 ceph-mon[74318]: 7.3 scrub starts
Jan 21 23:29:27 compute-0 ceph-mon[74318]: 7.3 scrub ok
Jan 21 23:29:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:28.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:28 compute-0 sudo[102643]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:28 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 21 23:29:28 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 21 23:29:28 compute-0 ceph-mon[74318]: 11.f scrub starts
Jan 21 23:29:28 compute-0 ceph-mon[74318]: 11.f scrub ok
Jan 21 23:29:28 compute-0 ceph-mon[74318]: pgmap v307: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:28 compute-0 ceph-mon[74318]: 7.1f scrub starts
Jan 21 23:29:28 compute-0 ceph-mon[74318]: 7.1f scrub ok
Jan 21 23:29:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:29:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:28.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:29:29 compute-0 sudo[102931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeleuzuydkvkfxytlmytxaslbrfdpphi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038168.398886-393-106038586370018/AnsiballZ_selinux.py'
Jan 21 23:29:29 compute-0 sudo[102931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:29 compute-0 ceph-mon[74318]: 11.1a scrub starts
Jan 21 23:29:29 compute-0 ceph-mon[74318]: 11.1a scrub ok
Jan 21 23:29:29 compute-0 ceph-mon[74318]: 10.19 scrub starts
Jan 21 23:29:29 compute-0 ceph-mon[74318]: 10.19 scrub ok
Jan 21 23:29:29 compute-0 ceph-mon[74318]: 8.2 deep-scrub starts
Jan 21 23:29:29 compute-0 ceph-mon[74318]: 8.2 deep-scrub ok
Jan 21 23:29:29 compute-0 python3.9[102934]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 21 23:29:29 compute-0 sudo[102931]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:30.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:30 compute-0 sudo[103084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njuacnnvzvatzcxybiaqtjippckckfgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038169.9428833-426-175388682230422/AnsiballZ_command.py'
Jan 21 23:29:30 compute-0 sudo[103084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:30 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 21 23:29:30 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 21 23:29:30 compute-0 python3.9[103086]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 21 23:29:30 compute-0 sudo[103084]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:30 compute-0 ceph-mon[74318]: pgmap v308: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:29:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:30.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:29:31 compute-0 sudo[103236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amiuldelqoffsxemjbfomeihbkqrnkjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038170.7022786-450-51354164191507/AnsiballZ_file.py'
Jan 21 23:29:31 compute-0 sudo[103236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:31 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Jan 21 23:29:31 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Jan 21 23:29:31 compute-0 python3.9[103238]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:29:31 compute-0 sudo[103236]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:31 compute-0 ceph-mon[74318]: 5.1b scrub starts
Jan 21 23:29:31 compute-0 ceph-mon[74318]: 5.1b scrub ok
Jan 21 23:29:31 compute-0 ceph-mon[74318]: 8.d scrub starts
Jan 21 23:29:31 compute-0 ceph-mon[74318]: 8.d scrub ok
Jan 21 23:29:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:29:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:32.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:29:32 compute-0 sudo[103389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pntconceajsmqysdeejnyjrllcivueky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038171.6003456-474-198266606822882/AnsiballZ_mount.py'
Jan 21 23:29:32 compute-0 sudo[103389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:29:32 compute-0 python3.9[103391]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 21 23:29:32 compute-0 sudo[103389]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:32 compute-0 ceph-mon[74318]: 5.1c deep-scrub starts
Jan 21 23:29:32 compute-0 ceph-mon[74318]: 5.1c deep-scrub ok
Jan 21 23:29:32 compute-0 ceph-mon[74318]: 7.b scrub starts
Jan 21 23:29:32 compute-0 ceph-mon[74318]: 7.b scrub ok
Jan 21 23:29:32 compute-0 ceph-mon[74318]: pgmap v309: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:32.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:33 compute-0 ceph-mon[74318]: 10.5 scrub starts
Jan 21 23:29:33 compute-0 ceph-mon[74318]: 10.5 scrub ok
Jan 21 23:29:33 compute-0 ceph-mon[74318]: 11.e scrub starts
Jan 21 23:29:33 compute-0 ceph-mon[74318]: 11.e scrub ok
Jan 21 23:29:33 compute-0 sudo[103542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnfeqwojcnmxjsukpvakbhvmkkmgeikg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038173.3799314-558-84570950982402/AnsiballZ_file.py'
Jan 21 23:29:33 compute-0 sudo[103542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:33 compute-0 python3.9[103544]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:29:33 compute-0 sudo[103542]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:34.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:34 compute-0 sudo[103694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfqsjifggfmtpeqyhjpcoskdieoygxct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038174.1922514-582-209376756337062/AnsiballZ_stat.py'
Jan 21 23:29:34 compute-0 sudo[103694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:34 compute-0 sudo[103697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:29:34 compute-0 sudo[103697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:29:34 compute-0 sudo[103697]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:34 compute-0 ceph-mon[74318]: 6.1 deep-scrub starts
Jan 21 23:29:34 compute-0 ceph-mon[74318]: pgmap v310: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:34 compute-0 ceph-mon[74318]: 6.1 deep-scrub ok
Jan 21 23:29:34 compute-0 sudo[103722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:29:34 compute-0 sudo[103722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:29:34 compute-0 sudo[103722]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:34 compute-0 python3.9[103696]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:29:34 compute-0 sudo[103694]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:34.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:35 compute-0 sudo[103822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqsfkfmqsvvnaynyugzmpnarfvrjcybv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038174.1922514-582-209376756337062/AnsiballZ_file.py'
Jan 21 23:29:35 compute-0 sudo[103822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:35 compute-0 python3.9[103824]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:29:35 compute-0 sudo[103822]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:36.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:36 compute-0 sudo[103975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dswgrjuqircorudfwhwlqkwctfbyqrnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038176.155361-645-201332356793824/AnsiballZ_stat.py'
Jan 21 23:29:36 compute-0 sudo[103975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:36 compute-0 python3.9[103977]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:29:36 compute-0 sudo[103975]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:36 compute-0 ceph-mon[74318]: pgmap v311: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:36.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:29:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:37 compute-0 ceph-mon[74318]: 8.f scrub starts
Jan 21 23:29:37 compute-0 ceph-mon[74318]: 8.f scrub ok
Jan 21 23:29:37 compute-0 sudo[104130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdljqldsdvejhnouvuifqnyznontbhar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038177.4261065-684-96127117648357/AnsiballZ_getent.py'
Jan 21 23:29:37 compute-0 sudo[104130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:29:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:38.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:29:38 compute-0 python3.9[104132]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 21 23:29:38 compute-0 sudo[104130]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:38 compute-0 sudo[104283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvlurqihikuqmaturilsgyshozuagzix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038178.5223722-714-50863782791062/AnsiballZ_getent.py'
Jan 21 23:29:38 compute-0 sudo[104283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:38.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:38 compute-0 ceph-mon[74318]: 8.5 scrub starts
Jan 21 23:29:38 compute-0 ceph-mon[74318]: 8.5 scrub ok
Jan 21 23:29:38 compute-0 ceph-mon[74318]: pgmap v312: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:39 compute-0 python3.9[104285]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 21 23:29:39 compute-0 sudo[104283]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:29:39
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.control', 'default.rgw.log', 'vms', 'backups', 'images', 'cephfs.cephfs.meta', '.rgw.root']
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:29:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:39 compute-0 sudo[104437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvybuhavkypedirrysjygwqiplgnsyjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038179.3728828-738-53281548218175/AnsiballZ_group.py'
Jan 21 23:29:39 compute-0 sudo[104437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:29:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:40.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:29:40 compute-0 python3.9[104439]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 23:29:40 compute-0 sudo[104437]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:40 compute-0 sudo[104589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvjzqxhkrlbymhsstwnabpxvicgfgods ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038180.4524038-765-218468012364369/AnsiballZ_file.py'
Jan 21 23:29:40 compute-0 sudo[104589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:40.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:40 compute-0 ceph-mon[74318]: pgmap v313: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:40 compute-0 python3.9[104591]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 21 23:29:41 compute-0 sudo[104589]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:42 compute-0 sudo[104742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmdgkqrpfkgvrjhgmiritoyfjudjivke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038181.6747851-798-166902999541966/AnsiballZ_dnf.py'
Jan 21 23:29:42 compute-0 sudo[104742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:42.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:29:42 compute-0 python3.9[104744]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:29:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:29:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:42.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:29:42 compute-0 ceph-mon[74318]: 8.6 scrub starts
Jan 21 23:29:42 compute-0 ceph-mon[74318]: 8.6 scrub ok
Jan 21 23:29:42 compute-0 ceph-mon[74318]: pgmap v314: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:43 compute-0 sudo[104742]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:29:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:44.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:29:44 compute-0 sudo[104896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udpaltidtwbumnokncgdffiozldpbrzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038183.8631377-822-18938504836599/AnsiballZ_file.py'
Jan 21 23:29:44 compute-0 sudo[104896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:44 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 21 23:29:44 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 21 23:29:44 compute-0 python3.9[104898]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:29:44 compute-0 sudo[104896]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:29:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:44.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:29:45 compute-0 ceph-mon[74318]: 7.8 scrub starts
Jan 21 23:29:45 compute-0 ceph-mon[74318]: 7.8 scrub ok
Jan 21 23:29:45 compute-0 ceph-mon[74318]: pgmap v315: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:45 compute-0 sudo[105048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omrdcomwhyqcpgyuoizxvyjtgsmtzklw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038184.6902008-846-277807420890378/AnsiballZ_stat.py'
Jan 21 23:29:45 compute-0 sudo[105048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:45 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 21 23:29:45 compute-0 python3.9[105050]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:29:45 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 21 23:29:45 compute-0 sudo[105048]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:45 compute-0 sudo[105127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aofgibejrcgagkyctroiwcjrcijkxqaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038184.6902008-846-277807420890378/AnsiballZ_file.py'
Jan 21 23:29:45 compute-0 sudo[105127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:45 compute-0 python3.9[105129]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:29:45 compute-0 sudo[105127]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:46 compute-0 ceph-mon[74318]: 5.1f scrub starts
Jan 21 23:29:46 compute-0 ceph-mon[74318]: 5.1f scrub ok
Jan 21 23:29:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:46.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:46 compute-0 sudo[105279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-intwimuvkgihfxplkosgewletyiytppn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038186.02092-885-73592072878939/AnsiballZ_stat.py'
Jan 21 23:29:46 compute-0 sudo[105279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:46 compute-0 python3.9[105281]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:29:46 compute-0 sudo[105279]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:29:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:46.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:29:47 compute-0 sudo[105357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yubrutowjkhkxucnytxgersbsedtobpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038186.02092-885-73592072878939/AnsiballZ_file.py'
Jan 21 23:29:47 compute-0 sudo[105357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:47 compute-0 ceph-mon[74318]: 5.9 scrub starts
Jan 21 23:29:47 compute-0 ceph-mon[74318]: 5.9 scrub ok
Jan 21 23:29:47 compute-0 ceph-mon[74318]: pgmap v316: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:47 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 21 23:29:47 compute-0 python3.9[105359]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:29:47 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 21 23:29:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:29:47 compute-0 sudo[105357]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:48.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:48 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 21 23:29:48 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 21 23:29:48 compute-0 sudo[105510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxijlhznirwlxzxtprwstnxdnzoadfet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038188.093082-930-201470605311387/AnsiballZ_dnf.py'
Jan 21 23:29:48 compute-0 sudo[105510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:48 compute-0 python3.9[105512]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:29:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:48.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:49 compute-0 ceph-mon[74318]: 5.15 scrub starts
Jan 21 23:29:49 compute-0 ceph-mon[74318]: 5.15 scrub ok
Jan 21 23:29:49 compute-0 ceph-mon[74318]: 7.9 scrub starts
Jan 21 23:29:49 compute-0 ceph-mon[74318]: 7.9 scrub ok
Jan 21 23:29:49 compute-0 ceph-mon[74318]: pgmap v317: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:49 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 21 23:29:49 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 21 23:29:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:49 compute-0 sudo[105510]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:29:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:50.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:29:50 compute-0 ceph-mon[74318]: 5.2 scrub starts
Jan 21 23:29:50 compute-0 ceph-mon[74318]: 5.2 scrub ok
Jan 21 23:29:50 compute-0 ceph-mon[74318]: 5.f scrub starts
Jan 21 23:29:50 compute-0 ceph-mon[74318]: 5.f scrub ok
Jan 21 23:29:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:50.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:51 compute-0 python3.9[105664]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:29:51 compute-0 ceph-mon[74318]: pgmap v318: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:51 compute-0 python3.9[105817]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 21 23:29:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:29:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:52.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:29:52 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.7 deep-scrub starts
Jan 21 23:29:52 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.7 deep-scrub ok
Jan 21 23:29:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:29:52 compute-0 ceph-mon[74318]: pgmap v319: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:52 compute-0 python3.9[105967]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:29:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:52.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:53 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 21 23:29:53 compute-0 ceph-mon[74318]: 5.7 deep-scrub starts
Jan 21 23:29:53 compute-0 ceph-mon[74318]: 5.7 deep-scrub ok
Jan 21 23:29:53 compute-0 ceph-mon[74318]: 7.6 scrub starts
Jan 21 23:29:53 compute-0 ceph-mon[74318]: 7.6 scrub ok
Jan 21 23:29:53 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:29:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:29:53 compute-0 sudo[106118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhehiqdevbsaebqsvlgavftnobqlvvny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038193.2929127-1053-22602700528125/AnsiballZ_systemd.py'
Jan 21 23:29:53 compute-0 sudo[106118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:54.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:54 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 21 23:29:54 compute-0 python3.9[106120]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:29:54 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 21 23:29:54 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 21 23:29:54 compute-0 ceph-mon[74318]: 5.18 scrub starts
Jan 21 23:29:54 compute-0 ceph-mon[74318]: 5.18 scrub ok
Jan 21 23:29:54 compute-0 ceph-mon[74318]: pgmap v320: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:54 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 21 23:29:54 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 21 23:29:54 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 21 23:29:54 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 21 23:29:54 compute-0 sudo[106118]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:54 compute-0 sudo[106157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:29:54 compute-0 sudo[106157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:29:54 compute-0 sudo[106157]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:54 compute-0 sudo[106182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:29:54 compute-0 sudo[106182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:29:54 compute-0 sudo[106182]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:29:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:54.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:29:55 compute-0 ceph-mon[74318]: 5.1 scrub starts
Jan 21 23:29:55 compute-0 ceph-mon[74318]: 5.1 scrub ok
Jan 21 23:29:55 compute-0 ceph-mon[74318]: 10.8 scrub starts
Jan 21 23:29:55 compute-0 ceph-mon[74318]: 10.8 scrub ok
Jan 21 23:29:55 compute-0 ceph-mon[74318]: 11.19 scrub starts
Jan 21 23:29:55 compute-0 ceph-mon[74318]: 11.19 scrub ok
Jan 21 23:29:55 compute-0 python3.9[106333]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 21 23:29:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:56.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:56 compute-0 ceph-mon[74318]: 11.3 scrub starts
Jan 21 23:29:56 compute-0 ceph-mon[74318]: 11.3 scrub ok
Jan 21 23:29:56 compute-0 ceph-mon[74318]: pgmap v321: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:56.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:29:57 compute-0 ceph-mon[74318]: 7.2 deep-scrub starts
Jan 21 23:29:57 compute-0 ceph-mon[74318]: 7.2 deep-scrub ok
Jan 21 23:29:57 compute-0 sudo[106359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:29:57 compute-0 sudo[106359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:29:57 compute-0 sudo[106359]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:57 compute-0 sudo[106384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:29:57 compute-0 sudo[106384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:29:57 compute-0 sudo[106384]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:57 compute-0 sudo[106409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:29:57 compute-0 sudo[106409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:29:57 compute-0 sudo[106409]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:57 compute-0 sudo[106434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 21 23:29:57 compute-0 sudo[106434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:29:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:29:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:29:58.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:29:58 compute-0 podman[106532]: 2026-01-21 23:29:58.435417491 +0000 UTC m=+0.092257435 container exec 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:29:58 compute-0 ceph-mon[74318]: 10.2 scrub starts
Jan 21 23:29:58 compute-0 ceph-mon[74318]: pgmap v322: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:58 compute-0 ceph-mon[74318]: 10.2 scrub ok
Jan 21 23:29:58 compute-0 ceph-mon[74318]: 11.8 scrub starts
Jan 21 23:29:58 compute-0 ceph-mon[74318]: 11.8 scrub ok
Jan 21 23:29:58 compute-0 podman[106532]: 2026-01-21 23:29:58.530117559 +0000 UTC m=+0.186957513 container exec_died 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 23:29:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:29:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:29:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:29:58.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:29:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:29:59 compute-0 sudo[106776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tttlxyrhlsbbdxkdlieujuaxpddeeglj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038198.6651704-1224-170001094359621/AnsiballZ_systemd.py'
Jan 21 23:29:59 compute-0 sudo[106776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:29:59 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:29:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:29:59 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:29:59 compute-0 podman[106815]: 2026-01-21 23:29:59.248687123 +0000 UTC m=+0.059189134 container exec fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 21 23:29:59 compute-0 podman[106815]: 2026-01-21 23:29:59.268940587 +0000 UTC m=+0.079442608 container exec_died fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 21 23:29:59 compute-0 python3.9[106783]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:29:59 compute-0 sudo[106776]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:59 compute-0 podman[106879]: 2026-01-21 23:29:59.53184567 +0000 UTC m=+0.074255261 container exec 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, vcs-type=git, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public)
Jan 21 23:29:59 compute-0 podman[106879]: 2026-01-21 23:29:59.576049518 +0000 UTC m=+0.118459059 container exec_died 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=keepalived, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, distribution-scope=public, io.openshift.tags=Ceph keepalived, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, description=keepalived for Ceph)
Jan 21 23:29:59 compute-0 sudo[106434]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:29:59 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:29:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:29:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:29:59 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:29:59 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:29:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:29:59 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:29:59 compute-0 sudo[106986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:29:59 compute-0 sudo[106986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:29:59 compute-0 sudo[106986]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:29:59 compute-0 sudo[107034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:29:59 compute-0 sudo[107034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:29:59 compute-0 sudo[107034]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:59 compute-0 sudo[107083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:29:59 compute-0 sudo[107083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:29:59 compute-0 sudo[107083]: pam_unix(sudo:session): session closed for user root
Jan 21 23:29:59 compute-0 sudo[107133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clniarrbiotplxxvdcfczghbkqgrwvzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038199.5854492-1224-114733924339577/AnsiballZ_systemd.py'
Jan 21 23:29:59 compute-0 sudo[107133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:00 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 21 23:30:00 compute-0 sudo[107137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:30:00 compute-0 sudo[107137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:30:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:00.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:30:00 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:30:00 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:30:00 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:30:00 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:30:00 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:30:00 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:30:00 compute-0 ceph-mon[74318]: overall HEALTH_OK
Jan 21 23:30:00 compute-0 python3.9[107136]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:30:00 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 21 23:30:00 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 21 23:30:00 compute-0 sudo[107133]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:00 compute-0 sudo[107137]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:30:00 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:30:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:30:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:30:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:30:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:30:00 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 96e42718-1b75-4735-8b72-abf1e5b41dd8 does not exist
Jan 21 23:30:00 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 4b9ff82d-ed89-4553-81ca-1eb39bf37a25 does not exist
Jan 21 23:30:00 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev aa28a984-e19a-41ac-9511-8f5e8d728e98 does not exist
Jan 21 23:30:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:30:00 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:30:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:30:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:30:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:30:00 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:30:00 compute-0 sudo[107217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:30:00 compute-0 sudo[107217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:00 compute-0 sudo[107217]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:00 compute-0 sudo[107242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:30:00 compute-0 sudo[107242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:00 compute-0 sudo[107242]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:00 compute-0 sudo[107267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:30:00 compute-0 sudo[107267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:00 compute-0 sudo[107267]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:00 compute-0 sudo[107292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:30:00 compute-0 sudo[107292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:00.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:00 compute-0 sshd-session[98976]: Connection closed by 192.168.122.30 port 52826
Jan 21 23:30:00 compute-0 sshd-session[98973]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:30:00 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Jan 21 23:30:00 compute-0 systemd[1]: session-35.scope: Consumed 1min 12.257s CPU time.
Jan 21 23:30:00 compute-0 systemd-logind[786]: Session 35 logged out. Waiting for processes to exit.
Jan 21 23:30:00 compute-0 systemd-logind[786]: Removed session 35.
Jan 21 23:30:01 compute-0 ceph-mon[74318]: pgmap v323: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:01 compute-0 ceph-mon[74318]: 5.10 scrub starts
Jan 21 23:30:01 compute-0 ceph-mon[74318]: 5.10 scrub ok
Jan 21 23:30:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:30:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:30:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:30:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:30:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:30:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:30:01 compute-0 podman[107358]: 2026-01-21 23:30:01.198595913 +0000 UTC m=+0.040347623 container create 88b256dbe04c6e3035ddf35e7c8f51a86043f5538b785a49484e3fe271a7c228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:30:01 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.11 deep-scrub starts
Jan 21 23:30:01 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.11 deep-scrub ok
Jan 21 23:30:01 compute-0 systemd[1]: Started libpod-conmon-88b256dbe04c6e3035ddf35e7c8f51a86043f5538b785a49484e3fe271a7c228.scope.
Jan 21 23:30:01 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:30:01 compute-0 podman[107358]: 2026-01-21 23:30:01.178908367 +0000 UTC m=+0.020660127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:30:01 compute-0 podman[107358]: 2026-01-21 23:30:01.277909025 +0000 UTC m=+0.119660775 container init 88b256dbe04c6e3035ddf35e7c8f51a86043f5538b785a49484e3fe271a7c228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_blackwell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 23:30:01 compute-0 podman[107358]: 2026-01-21 23:30:01.284941688 +0000 UTC m=+0.126693398 container start 88b256dbe04c6e3035ddf35e7c8f51a86043f5538b785a49484e3fe271a7c228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_blackwell, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 21 23:30:01 compute-0 podman[107358]: 2026-01-21 23:30:01.288464595 +0000 UTC m=+0.130216315 container attach 88b256dbe04c6e3035ddf35e7c8f51a86043f5538b785a49484e3fe271a7c228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_blackwell, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:30:01 compute-0 frosty_blackwell[107374]: 167 167
Jan 21 23:30:01 compute-0 systemd[1]: libpod-88b256dbe04c6e3035ddf35e7c8f51a86043f5538b785a49484e3fe271a7c228.scope: Deactivated successfully.
Jan 21 23:30:01 compute-0 conmon[107374]: conmon 88b256dbe04c6e3035dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88b256dbe04c6e3035ddf35e7c8f51a86043f5538b785a49484e3fe271a7c228.scope/container/memory.events
Jan 21 23:30:01 compute-0 podman[107358]: 2026-01-21 23:30:01.290159916 +0000 UTC m=+0.131911646 container died 88b256dbe04c6e3035ddf35e7c8f51a86043f5538b785a49484e3fe271a7c228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:30:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c745a22db02e99fc584141ace7add508954f0569080d047a8189b9696cb6cab-merged.mount: Deactivated successfully.
Jan 21 23:30:01 compute-0 podman[107358]: 2026-01-21 23:30:01.33124616 +0000 UTC m=+0.172997900 container remove 88b256dbe04c6e3035ddf35e7c8f51a86043f5538b785a49484e3fe271a7c228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 21 23:30:01 compute-0 systemd[1]: libpod-conmon-88b256dbe04c6e3035ddf35e7c8f51a86043f5538b785a49484e3fe271a7c228.scope: Deactivated successfully.
Jan 21 23:30:01 compute-0 podman[107398]: 2026-01-21 23:30:01.481266655 +0000 UTC m=+0.038998012 container create 1b7a94406e19bb40abed6978c69cc451d36cca3fd7f627a896687f26d5702b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_solomon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 21 23:30:01 compute-0 systemd[1]: Started libpod-conmon-1b7a94406e19bb40abed6978c69cc451d36cca3fd7f627a896687f26d5702b84.scope.
Jan 21 23:30:01 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7addf67fef14c4f07bf287fee7c754301317b4b517e8c404355b77254c012a6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7addf67fef14c4f07bf287fee7c754301317b4b517e8c404355b77254c012a6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7addf67fef14c4f07bf287fee7c754301317b4b517e8c404355b77254c012a6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7addf67fef14c4f07bf287fee7c754301317b4b517e8c404355b77254c012a6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7addf67fef14c4f07bf287fee7c754301317b4b517e8c404355b77254c012a6e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:30:01 compute-0 podman[107398]: 2026-01-21 23:30:01.462463495 +0000 UTC m=+0.020194882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:30:01 compute-0 podman[107398]: 2026-01-21 23:30:01.57819616 +0000 UTC m=+0.135927517 container init 1b7a94406e19bb40abed6978c69cc451d36cca3fd7f627a896687f26d5702b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_solomon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:30:01 compute-0 podman[107398]: 2026-01-21 23:30:01.58546645 +0000 UTC m=+0.143197787 container start 1b7a94406e19bb40abed6978c69cc451d36cca3fd7f627a896687f26d5702b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_solomon, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 23:30:01 compute-0 podman[107398]: 2026-01-21 23:30:01.5887565 +0000 UTC m=+0.146487837 container attach 1b7a94406e19bb40abed6978c69cc451d36cca3fd7f627a896687f26d5702b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_solomon, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:30:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:02.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:02 compute-0 ceph-mon[74318]: 5.11 deep-scrub starts
Jan 21 23:30:02 compute-0 ceph-mon[74318]: 5.11 deep-scrub ok
Jan 21 23:30:02 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 21 23:30:02 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 21 23:30:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:30:02 compute-0 gracious_solomon[107414]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:30:02 compute-0 gracious_solomon[107414]: --> relative data size: 1.0
Jan 21 23:30:02 compute-0 gracious_solomon[107414]: --> All data devices are unavailable
Jan 21 23:30:02 compute-0 systemd[1]: libpod-1b7a94406e19bb40abed6978c69cc451d36cca3fd7f627a896687f26d5702b84.scope: Deactivated successfully.
Jan 21 23:30:02 compute-0 podman[107398]: 2026-01-21 23:30:02.434450525 +0000 UTC m=+0.992181902 container died 1b7a94406e19bb40abed6978c69cc451d36cca3fd7f627a896687f26d5702b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_solomon, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Jan 21 23:30:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7addf67fef14c4f07bf287fee7c754301317b4b517e8c404355b77254c012a6e-merged.mount: Deactivated successfully.
Jan 21 23:30:02 compute-0 podman[107398]: 2026-01-21 23:30:02.502640081 +0000 UTC m=+1.060371428 container remove 1b7a94406e19bb40abed6978c69cc451d36cca3fd7f627a896687f26d5702b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 21 23:30:02 compute-0 systemd[1]: libpod-conmon-1b7a94406e19bb40abed6978c69cc451d36cca3fd7f627a896687f26d5702b84.scope: Deactivated successfully.
Jan 21 23:30:02 compute-0 sudo[107292]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:02 compute-0 sudo[107441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:30:02 compute-0 sudo[107441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:02 compute-0 sudo[107441]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:02 compute-0 sudo[107466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:30:02 compute-0 sudo[107466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:02 compute-0 sudo[107466]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:02 compute-0 sudo[107491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:30:02 compute-0 sudo[107491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:02 compute-0 sudo[107491]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:02 compute-0 sudo[107516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:30:02 compute-0 sudo[107516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:30:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:02.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:30:03 compute-0 podman[107581]: 2026-01-21 23:30:03.188382961 +0000 UTC m=+0.061624308 container create 76c7a76960b4c2d34b565a905438fda9f8a4a4f1f53e5551cace1e7188969a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 21 23:30:03 compute-0 ceph-mon[74318]: pgmap v324: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:03 compute-0 ceph-mon[74318]: 5.16 scrub starts
Jan 21 23:30:03 compute-0 ceph-mon[74318]: 5.16 scrub ok
Jan 21 23:30:03 compute-0 ceph-mon[74318]: 8.b scrub starts
Jan 21 23:30:03 compute-0 ceph-mon[74318]: 8.b scrub ok
Jan 21 23:30:03 compute-0 systemd[1]: Started libpod-conmon-76c7a76960b4c2d34b565a905438fda9f8a4a4f1f53e5551cace1e7188969a14.scope.
Jan 21 23:30:03 compute-0 podman[107581]: 2026-01-21 23:30:03.159035041 +0000 UTC m=+0.032276418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:30:03 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:30:03 compute-0 podman[107581]: 2026-01-21 23:30:03.291024319 +0000 UTC m=+0.164265646 container init 76c7a76960b4c2d34b565a905438fda9f8a4a4f1f53e5551cace1e7188969a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_fermi, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:30:03 compute-0 podman[107581]: 2026-01-21 23:30:03.301178787 +0000 UTC m=+0.174420094 container start 76c7a76960b4c2d34b565a905438fda9f8a4a4f1f53e5551cace1e7188969a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_fermi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:30:03 compute-0 podman[107581]: 2026-01-21 23:30:03.304117026 +0000 UTC m=+0.177358333 container attach 76c7a76960b4c2d34b565a905438fda9f8a4a4f1f53e5551cace1e7188969a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:30:03 compute-0 sad_fermi[107599]: 167 167
Jan 21 23:30:03 compute-0 systemd[1]: libpod-76c7a76960b4c2d34b565a905438fda9f8a4a4f1f53e5551cace1e7188969a14.scope: Deactivated successfully.
Jan 21 23:30:03 compute-0 podman[107581]: 2026-01-21 23:30:03.307071285 +0000 UTC m=+0.180312592 container died 76c7a76960b4c2d34b565a905438fda9f8a4a4f1f53e5551cace1e7188969a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_fermi, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:30:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b6093fcaabcad7e880784bb168b3cf350593d4df14beb750ed34268b4e2f92d-merged.mount: Deactivated successfully.
Jan 21 23:30:03 compute-0 podman[107581]: 2026-01-21 23:30:03.34388175 +0000 UTC m=+0.217123057 container remove 76c7a76960b4c2d34b565a905438fda9f8a4a4f1f53e5551cace1e7188969a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_fermi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:30:03 compute-0 systemd[1]: libpod-conmon-76c7a76960b4c2d34b565a905438fda9f8a4a4f1f53e5551cace1e7188969a14.scope: Deactivated successfully.
Jan 21 23:30:03 compute-0 podman[107622]: 2026-01-21 23:30:03.550870639 +0000 UTC m=+0.059467211 container create dc01596fe05c8aee051eb967e26fe98cb7918dfe2ba939836479662e2efb4fd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:30:03 compute-0 systemd[1]: Started libpod-conmon-dc01596fe05c8aee051eb967e26fe98cb7918dfe2ba939836479662e2efb4fd6.scope.
Jan 21 23:30:03 compute-0 podman[107622]: 2026-01-21 23:30:03.533758261 +0000 UTC m=+0.042354833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:30:03 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6ef9d03bc4f393e3d0017e2dc2a0aeca3d492c79f6b08a026adb15a6126da8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6ef9d03bc4f393e3d0017e2dc2a0aeca3d492c79f6b08a026adb15a6126da8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6ef9d03bc4f393e3d0017e2dc2a0aeca3d492c79f6b08a026adb15a6126da8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6ef9d03bc4f393e3d0017e2dc2a0aeca3d492c79f6b08a026adb15a6126da8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:30:03 compute-0 podman[107622]: 2026-01-21 23:30:03.649240859 +0000 UTC m=+0.157837451 container init dc01596fe05c8aee051eb967e26fe98cb7918dfe2ba939836479662e2efb4fd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:30:03 compute-0 podman[107622]: 2026-01-21 23:30:03.661108489 +0000 UTC m=+0.169705051 container start dc01596fe05c8aee051eb967e26fe98cb7918dfe2ba939836479662e2efb4fd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:30:03 compute-0 podman[107622]: 2026-01-21 23:30:03.665226144 +0000 UTC m=+0.173822706 container attach dc01596fe05c8aee051eb967e26fe98cb7918dfe2ba939836479662e2efb4fd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_grothendieck, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:30:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:30:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:04.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]: {
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:     "1": [
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:         {
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:             "devices": [
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:                 "/dev/loop3"
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:             ],
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:             "lv_name": "ceph_lv0",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:             "lv_size": "7511998464",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:             "name": "ceph_lv0",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:             "tags": {
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:                 "ceph.cluster_name": "ceph",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:                 "ceph.crush_device_class": "",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:                 "ceph.encrypted": "0",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:                 "ceph.osd_id": "1",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:                 "ceph.type": "block",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:                 "ceph.vdo": "0"
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:             },
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:             "type": "block",
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:             "vg_name": "ceph_vg0"
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:         }
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]:     ]
Jan 21 23:30:04 compute-0 crazy_grothendieck[107638]: }
Jan 21 23:30:04 compute-0 systemd[1]: libpod-dc01596fe05c8aee051eb967e26fe98cb7918dfe2ba939836479662e2efb4fd6.scope: Deactivated successfully.
Jan 21 23:30:04 compute-0 podman[107622]: 2026-01-21 23:30:04.45945818 +0000 UTC m=+0.968054822 container died dc01596fe05c8aee051eb967e26fe98cb7918dfe2ba939836479662e2efb4fd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 23:30:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d6ef9d03bc4f393e3d0017e2dc2a0aeca3d492c79f6b08a026adb15a6126da8-merged.mount: Deactivated successfully.
Jan 21 23:30:04 compute-0 podman[107622]: 2026-01-21 23:30:04.525363315 +0000 UTC m=+1.033959867 container remove dc01596fe05c8aee051eb967e26fe98cb7918dfe2ba939836479662e2efb4fd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_grothendieck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:30:04 compute-0 systemd[1]: libpod-conmon-dc01596fe05c8aee051eb967e26fe98cb7918dfe2ba939836479662e2efb4fd6.scope: Deactivated successfully.
Jan 21 23:30:04 compute-0 sudo[107516]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:04 compute-0 sudo[107661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:30:04 compute-0 sudo[107661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:04 compute-0 sudo[107661]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:04 compute-0 sudo[107686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:30:04 compute-0 sudo[107686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:04 compute-0 sudo[107686]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:04 compute-0 sudo[107711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:30:04 compute-0 sudo[107711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:04 compute-0 sudo[107711]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:04 compute-0 sudo[107736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:30:04 compute-0 sudo[107736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:04.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:05 compute-0 podman[107803]: 2026-01-21 23:30:05.207452185 +0000 UTC m=+0.064595807 container create 71c7afd9cf59c79adaef74eb26a93a2773deb6742042fe8e2f0c2a20b786ec4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_fermi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:30:05 compute-0 ceph-mon[74318]: pgmap v325: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:05 compute-0 systemd[1]: Started libpod-conmon-71c7afd9cf59c79adaef74eb26a93a2773deb6742042fe8e2f0c2a20b786ec4a.scope.
Jan 21 23:30:05 compute-0 podman[107803]: 2026-01-21 23:30:05.180251481 +0000 UTC m=+0.037395163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:30:05 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:30:05 compute-0 podman[107803]: 2026-01-21 23:30:05.306458884 +0000 UTC m=+0.163602536 container init 71c7afd9cf59c79adaef74eb26a93a2773deb6742042fe8e2f0c2a20b786ec4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_fermi, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:30:05 compute-0 podman[107803]: 2026-01-21 23:30:05.316822018 +0000 UTC m=+0.173965610 container start 71c7afd9cf59c79adaef74eb26a93a2773deb6742042fe8e2f0c2a20b786ec4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_fermi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:30:05 compute-0 podman[107803]: 2026-01-21 23:30:05.320738886 +0000 UTC m=+0.177882568 container attach 71c7afd9cf59c79adaef74eb26a93a2773deb6742042fe8e2f0c2a20b786ec4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_fermi, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 21 23:30:05 compute-0 inspiring_fermi[107819]: 167 167
Jan 21 23:30:05 compute-0 systemd[1]: libpod-71c7afd9cf59c79adaef74eb26a93a2773deb6742042fe8e2f0c2a20b786ec4a.scope: Deactivated successfully.
Jan 21 23:30:05 compute-0 conmon[107819]: conmon 71c7afd9cf59c79adaef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-71c7afd9cf59c79adaef74eb26a93a2773deb6742042fe8e2f0c2a20b786ec4a.scope/container/memory.events
Jan 21 23:30:05 compute-0 podman[107803]: 2026-01-21 23:30:05.324828671 +0000 UTC m=+0.181972313 container died 71c7afd9cf59c79adaef74eb26a93a2773deb6742042fe8e2f0c2a20b786ec4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_fermi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 21 23:30:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cfa5c75d7544dc0f258f63726a6a1b1141e4aef025413fa17aaa131f5663bb6-merged.mount: Deactivated successfully.
Jan 21 23:30:05 compute-0 podman[107803]: 2026-01-21 23:30:05.366628807 +0000 UTC m=+0.223772429 container remove 71c7afd9cf59c79adaef74eb26a93a2773deb6742042fe8e2f0c2a20b786ec4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_fermi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 23:30:05 compute-0 systemd[1]: libpod-conmon-71c7afd9cf59c79adaef74eb26a93a2773deb6742042fe8e2f0c2a20b786ec4a.scope: Deactivated successfully.
Jan 21 23:30:05 compute-0 podman[107842]: 2026-01-21 23:30:05.583902217 +0000 UTC m=+0.069651501 container create b423d2e53114406b64d06c7ed4adec7d30afbb6abf04b6fb893bf8ea3cac80b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_boyd, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:30:05 compute-0 systemd[1]: Started libpod-conmon-b423d2e53114406b64d06c7ed4adec7d30afbb6abf04b6fb893bf8ea3cac80b7.scope.
Jan 21 23:30:05 compute-0 podman[107842]: 2026-01-21 23:30:05.557683063 +0000 UTC m=+0.043432387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:30:05 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d97683e68a221f77a4318cdfe70677fc89bb5053f71e118cd6c343052aa923e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d97683e68a221f77a4318cdfe70677fc89bb5053f71e118cd6c343052aa923e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d97683e68a221f77a4318cdfe70677fc89bb5053f71e118cd6c343052aa923e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d97683e68a221f77a4318cdfe70677fc89bb5053f71e118cd6c343052aa923e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:30:05 compute-0 podman[107842]: 2026-01-21 23:30:05.681764841 +0000 UTC m=+0.167514195 container init b423d2e53114406b64d06c7ed4adec7d30afbb6abf04b6fb893bf8ea3cac80b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Jan 21 23:30:05 compute-0 podman[107842]: 2026-01-21 23:30:05.694485187 +0000 UTC m=+0.180234481 container start b423d2e53114406b64d06c7ed4adec7d30afbb6abf04b6fb893bf8ea3cac80b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 21 23:30:05 compute-0 podman[107842]: 2026-01-21 23:30:05.700135548 +0000 UTC m=+0.185884842 container attach b423d2e53114406b64d06c7ed4adec7d30afbb6abf04b6fb893bf8ea3cac80b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_boyd, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:30:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:06 compute-0 sshd-session[107863]: Accepted publickey for zuul from 192.168.122.30 port 50416 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:30:06 compute-0 systemd-logind[786]: New session 36 of user zuul.
Jan 21 23:30:06 compute-0 systemd[1]: Started Session 36 of User zuul.
Jan 21 23:30:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:30:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:06.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:30:06 compute-0 sshd-session[107863]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:30:06 compute-0 ceph-mon[74318]: 11.17 scrub starts
Jan 21 23:30:06 compute-0 ceph-mon[74318]: 11.17 scrub ok
Jan 21 23:30:06 compute-0 ceph-mon[74318]: 7.f scrub starts
Jan 21 23:30:06 compute-0 ceph-mon[74318]: 7.f scrub ok
Jan 21 23:30:06 compute-0 interesting_boyd[107858]: {
Jan 21 23:30:06 compute-0 interesting_boyd[107858]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:30:06 compute-0 interesting_boyd[107858]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:30:06 compute-0 interesting_boyd[107858]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:30:06 compute-0 interesting_boyd[107858]:         "osd_id": 1,
Jan 21 23:30:06 compute-0 interesting_boyd[107858]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:30:06 compute-0 interesting_boyd[107858]:         "type": "bluestore"
Jan 21 23:30:06 compute-0 interesting_boyd[107858]:     }
Jan 21 23:30:06 compute-0 interesting_boyd[107858]: }
Jan 21 23:30:06 compute-0 systemd[1]: libpod-b423d2e53114406b64d06c7ed4adec7d30afbb6abf04b6fb893bf8ea3cac80b7.scope: Deactivated successfully.
Jan 21 23:30:06 compute-0 podman[107842]: 2026-01-21 23:30:06.678093758 +0000 UTC m=+1.163843052 container died b423d2e53114406b64d06c7ed4adec7d30afbb6abf04b6fb893bf8ea3cac80b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_boyd, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 23:30:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d97683e68a221f77a4318cdfe70677fc89bb5053f71e118cd6c343052aa923e-merged.mount: Deactivated successfully.
Jan 21 23:30:06 compute-0 podman[107842]: 2026-01-21 23:30:06.764935969 +0000 UTC m=+1.250685233 container remove b423d2e53114406b64d06c7ed4adec7d30afbb6abf04b6fb893bf8ea3cac80b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:30:06 compute-0 systemd[1]: libpod-conmon-b423d2e53114406b64d06c7ed4adec7d30afbb6abf04b6fb893bf8ea3cac80b7.scope: Deactivated successfully.
Jan 21 23:30:06 compute-0 sudo[107736]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:06 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:30:06 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:30:06 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:30:06 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:30:06 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev b4c903f7-6e41-42c8-977b-b1e3d9069947 does not exist
Jan 21 23:30:06 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 7a6dca85-23f8-4833-bb29-47141eaed1b6 does not exist
Jan 21 23:30:06 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev c51b9ab2-6102-4ccf-afd9-a687fc212224 does not exist
Jan 21 23:30:06 compute-0 sudo[107980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:30:06 compute-0 sudo[107980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:06 compute-0 sudo[107980]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:30:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:06.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:30:06 compute-0 sudo[108022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:30:06 compute-0 sudo[108022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:06 compute-0 sudo[108022]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:30:07 compute-0 ceph-mon[74318]: pgmap v326: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:07 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:30:07 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:30:07 compute-0 python3.9[108097]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:30:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:30:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:08.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:30:08 compute-0 ceph-mon[74318]: 8.11 scrub starts
Jan 21 23:30:08 compute-0 ceph-mon[74318]: 8.11 scrub ok
Jan 21 23:30:08 compute-0 ceph-mon[74318]: 7.18 scrub starts
Jan 21 23:30:08 compute-0 ceph-mon[74318]: 7.18 scrub ok
Jan 21 23:30:08 compute-0 ceph-mon[74318]: pgmap v327: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:08 compute-0 sudo[108252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqvkdfsfrcfpjtqcytlczvjskofdcrmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038208.0349407-68-130532598691175/AnsiballZ_getent.py'
Jan 21 23:30:08 compute-0 sudo[108252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:08 compute-0 python3.9[108254]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 21 23:30:08 compute-0 sudo[108252]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:08.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:30:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:30:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:30:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:30:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:30:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:30:09 compute-0 ceph-mon[74318]: 8.16 scrub starts
Jan 21 23:30:09 compute-0 ceph-mon[74318]: 8.16 scrub ok
Jan 21 23:30:09 compute-0 sudo[108406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvxypgtdfnplhrytccgwjwlnlelhgxcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038209.1590216-104-204927456102706/AnsiballZ_setup.py'
Jan 21 23:30:09 compute-0 sudo[108406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:09 compute-0 python3.9[108408]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:30:10 compute-0 sudo[108406]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:10.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:10 compute-0 ceph-mon[74318]: 8.1f scrub starts
Jan 21 23:30:10 compute-0 ceph-mon[74318]: 8.1f scrub ok
Jan 21 23:30:10 compute-0 ceph-mon[74318]: pgmap v328: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:10 compute-0 sudo[108490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkhfugzxwptzdkfkmmjqyzbjbccnsfnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038209.1590216-104-204927456102706/AnsiballZ_dnf.py'
Jan 21 23:30:10 compute-0 sudo[108490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:10 compute-0 python3.9[108492]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 21 23:30:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:30:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:10.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:30:11 compute-0 ceph-mon[74318]: 10.1e deep-scrub starts
Jan 21 23:30:11 compute-0 ceph-mon[74318]: 10.1e deep-scrub ok
Jan 21 23:30:11 compute-0 ceph-mon[74318]: 7.e scrub starts
Jan 21 23:30:11 compute-0 ceph-mon[74318]: 7.e scrub ok
Jan 21 23:30:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:12 compute-0 sudo[108490]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:30:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:12.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:30:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:30:12 compute-0 ceph-mon[74318]: 7.11 deep-scrub starts
Jan 21 23:30:12 compute-0 ceph-mon[74318]: 7.11 deep-scrub ok
Jan 21 23:30:12 compute-0 ceph-mon[74318]: pgmap v329: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:12 compute-0 sudo[108644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unwkvbgzpkdcwredcegnrqvrnqkdupfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038212.5899143-146-115373699057837/AnsiballZ_dnf.py'
Jan 21 23:30:12 compute-0 sudo[108644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:12.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:13 compute-0 python3.9[108646]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:30:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:14 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 21 23:30:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:30:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:14.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:30:14 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 21 23:30:14 compute-0 sudo[108644]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:14 compute-0 ceph-mon[74318]: pgmap v330: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:14 compute-0 ceph-mon[74318]: 6.2 scrub starts
Jan 21 23:30:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:30:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:14.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:30:14 compute-0 sudo[108725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:30:14 compute-0 sudo[108725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:14 compute-0 sudo[108725]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:15 compute-0 sudo[108750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:30:15 compute-0 sudo[108750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:15 compute-0 sudo[108750]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:15 compute-0 sudo[108849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcmuhfrgazmocdguiruwphfqirthcsbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038214.7438598-170-12679218593975/AnsiballZ_systemd.py'
Jan 21 23:30:15 compute-0 sudo[108849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:15 compute-0 python3.9[108851]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 23:30:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:15 compute-0 sudo[108849]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:15 compute-0 ceph-mon[74318]: 6.2 scrub ok
Jan 21 23:30:15 compute-0 ceph-mon[74318]: 10.15 scrub starts
Jan 21 23:30:15 compute-0 ceph-mon[74318]: 10.15 scrub ok
Jan 21 23:30:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:30:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:16.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:30:16 compute-0 python3.9[109004]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:30:16 compute-0 ceph-mon[74318]: pgmap v331: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:16.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:30:17 compute-0 sudo[109155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kskpiivgjlsuatkvkaowirripglcwfcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038217.08852-224-199070463007385/AnsiballZ_sefcontext.py'
Jan 21 23:30:17 compute-0 sudo[109155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:17 compute-0 python3.9[109157]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 21 23:30:18 compute-0 sudo[109155]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:30:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:18.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:30:18 compute-0 ceph-mon[74318]: pgmap v332: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:30:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:18.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:30:19 compute-0 python3.9[109307]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:30:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:19 compute-0 sudo[109464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkbvpjdvllkuzbzlmwikpilabpuvsxyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038219.633545-278-63981971764223/AnsiballZ_dnf.py'
Jan 21 23:30:19 compute-0 sudo[109464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:30:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:20.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:30:20 compute-0 python3.9[109466]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:30:20 compute-0 ceph-mon[74318]: pgmap v333: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:20.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:21 compute-0 sudo[109464]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:21 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 21 23:30:21 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 21 23:30:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:30:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:22.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:30:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:30:22 compute-0 sudo[109618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsqmbdlektevfvoxkpawbgwqheksylbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038221.9653587-302-170604991824345/AnsiballZ_command.py'
Jan 21 23:30:22 compute-0 sudo[109618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:22 compute-0 python3.9[109620]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:30:22 compute-0 ceph-mon[74318]: 7.14 scrub starts
Jan 21 23:30:22 compute-0 ceph-mon[74318]: 7.14 scrub ok
Jan 21 23:30:22 compute-0 ceph-mon[74318]: pgmap v334: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:22 compute-0 ceph-mon[74318]: 6.a scrub starts
Jan 21 23:30:22 compute-0 ceph-mon[74318]: 6.a scrub ok
Jan 21 23:30:22 compute-0 ceph-mon[74318]: 7.1b scrub starts
Jan 21 23:30:22 compute-0 ceph-mon[74318]: 7.1b scrub ok
Jan 21 23:30:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:30:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:22.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:30:23 compute-0 sudo[109618]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:24.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:24 compute-0 sudo[109906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwdtqjphgweqhrzjjgraybmckzcafhbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038223.7331324-326-3404865447315/AnsiballZ_file.py'
Jan 21 23:30:24 compute-0 sudo[109906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:24 compute-0 python3.9[109908]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 21 23:30:24 compute-0 sudo[109906]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:24 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 21 23:30:24 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 21 23:30:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:24.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:24 compute-0 ceph-mon[74318]: pgmap v335: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:25 compute-0 python3.9[110059]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:30:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:25 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 21 23:30:25 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 21 23:30:25 compute-0 ceph-mon[74318]: 6.3 scrub starts
Jan 21 23:30:25 compute-0 ceph-mon[74318]: 6.3 scrub ok
Jan 21 23:30:26 compute-0 sudo[110211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-holyzokzljdzawxuzobbxygdzmdhyyzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038225.7156994-374-27991346669253/AnsiballZ_dnf.py'
Jan 21 23:30:26 compute-0 sudo[110211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:26.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:26 compute-0 python3.9[110213]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:30:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:26.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:26 compute-0 ceph-mon[74318]: pgmap v336: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:26 compute-0 ceph-mon[74318]: 6.7 scrub starts
Jan 21 23:30:26 compute-0 ceph-mon[74318]: 6.7 scrub ok
Jan 21 23:30:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:30:27 compute-0 sudo[110211]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:28 compute-0 ceph-mon[74318]: 7.5 scrub starts
Jan 21 23:30:28 compute-0 ceph-mon[74318]: 7.5 scrub ok
Jan 21 23:30:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:28.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:28 compute-0 sudo[110365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfxejwnlkdypjbuknjrzsmerfqehtwyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038227.9386213-401-238535129875697/AnsiballZ_dnf.py'
Jan 21 23:30:28 compute-0 sudo[110365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:30:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:28.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:30:29 compute-0 ceph-mon[74318]: pgmap v337: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:29 compute-0 python3.9[110367]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:30:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:29 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 21 23:30:29 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 21 23:30:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:30.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:30 compute-0 sudo[110365]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:30.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:31 compute-0 ceph-mon[74318]: 7.a scrub starts
Jan 21 23:30:31 compute-0 ceph-mon[74318]: 7.a scrub ok
Jan 21 23:30:31 compute-0 ceph-mon[74318]: pgmap v338: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:31 compute-0 ceph-mon[74318]: 6.5 scrub starts
Jan 21 23:30:31 compute-0 ceph-mon[74318]: 6.5 scrub ok
Jan 21 23:30:31 compute-0 sudo[110520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irkfiyspbzznwabkibtmxbstxqehvkri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038230.8201032-437-130361694638086/AnsiballZ_stat.py'
Jan 21 23:30:31 compute-0 sudo[110520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:31 compute-0 python3.9[110522]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:30:31 compute-0 sudo[110520]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:32 compute-0 sudo[110674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcxzlfnzvqrplkfriktmryfkikusscvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038231.6316862-461-185314682022643/AnsiballZ_slurp.py'
Jan 21 23:30:32 compute-0 sudo[110674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:32.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:30:32 compute-0 python3.9[110676]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 21 23:30:32 compute-0 sudo[110674]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:30:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:32.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:30:32 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 21 23:30:33 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 21 23:30:33 compute-0 ceph-mon[74318]: pgmap v339: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:33 compute-0 sshd-session[107866]: Connection closed by 192.168.122.30 port 50416
Jan 21 23:30:33 compute-0 sshd-session[107863]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:30:33 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Jan 21 23:30:33 compute-0 systemd[1]: session-36.scope: Consumed 19.632s CPU time.
Jan 21 23:30:33 compute-0 systemd-logind[786]: Session 36 logged out. Waiting for processes to exit.
Jan 21 23:30:33 compute-0 systemd-logind[786]: Removed session 36.
Jan 21 23:30:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:34 compute-0 ceph-mon[74318]: 10.4 scrub starts
Jan 21 23:30:34 compute-0 ceph-mon[74318]: 10.4 scrub ok
Jan 21 23:30:34 compute-0 ceph-mon[74318]: 6.d scrub starts
Jan 21 23:30:34 compute-0 ceph-mon[74318]: 6.d scrub ok
Jan 21 23:30:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:30:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:34.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:30:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:34.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:35 compute-0 ceph-mon[74318]: pgmap v340: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:35 compute-0 sudo[110702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:30:35 compute-0 sudo[110702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:35 compute-0 sudo[110702]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:35 compute-0 sudo[110728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:30:35 compute-0 sudo[110728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:35 compute-0 sudo[110728]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:36 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 9.e deep-scrub starts
Jan 21 23:30:36 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 9.e deep-scrub ok
Jan 21 23:30:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:36.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:36 compute-0 ceph-mon[74318]: 10.14 scrub starts
Jan 21 23:30:36 compute-0 ceph-mon[74318]: 10.14 scrub ok
Jan 21 23:30:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:36.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:37 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 21 23:30:37 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 21 23:30:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:30:37 compute-0 ceph-mon[74318]: pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:37 compute-0 ceph-mon[74318]: 9.e deep-scrub starts
Jan 21 23:30:37 compute-0 ceph-mon[74318]: 9.e deep-scrub ok
Jan 21 23:30:37 compute-0 ceph-mon[74318]: 5.19 scrub starts
Jan 21 23:30:37 compute-0 ceph-mon[74318]: 5.19 scrub ok
Jan 21 23:30:37 compute-0 ceph-mon[74318]: 9.6 scrub starts
Jan 21 23:30:37 compute-0 ceph-mon[74318]: 9.6 scrub ok
Jan 21 23:30:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:38 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 21 23:30:38 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 21 23:30:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:30:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:38.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:30:38 compute-0 sshd-session[110754]: Accepted publickey for zuul from 192.168.122.30 port 57402 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:30:38 compute-0 systemd-logind[786]: New session 37 of user zuul.
Jan 21 23:30:38 compute-0 systemd[1]: Started Session 37 of User zuul.
Jan 21 23:30:38 compute-0 sshd-session[110754]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:30:38 compute-0 ceph-mon[74318]: pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:38 compute-0 ceph-mon[74318]: 5.3 scrub starts
Jan 21 23:30:38 compute-0 ceph-mon[74318]: 6.8 scrub starts
Jan 21 23:30:38 compute-0 ceph-mon[74318]: 5.3 scrub ok
Jan 21 23:30:38 compute-0 ceph-mon[74318]: 6.8 scrub ok
Jan 21 23:30:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:30:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:38.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:30:39
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['default.rgw.control', 'volumes', '.mgr', 'backups', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'images', 'cephfs.cephfs.meta']
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:30:39 compute-0 python3.9[110907]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:30:39 compute-0 ceph-mon[74318]: 10.f scrub starts
Jan 21 23:30:39 compute-0 ceph-mon[74318]: 10.f scrub ok
Jan 21 23:30:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:40 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 21 23:30:40 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 21 23:30:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:30:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:40.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:30:40 compute-0 ceph-mon[74318]: 10.3 scrub starts
Jan 21 23:30:40 compute-0 ceph-mon[74318]: 10.3 scrub ok
Jan 21 23:30:40 compute-0 ceph-mon[74318]: pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:40 compute-0 ceph-mon[74318]: 9.a scrub starts
Jan 21 23:30:40 compute-0 ceph-mon[74318]: 9.a scrub ok
Jan 21 23:30:40 compute-0 python3.9[111062]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:30:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:40.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:41 compute-0 python3.9[111256]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:30:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:42.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:30:42 compute-0 sshd-session[110757]: Connection closed by 192.168.122.30 port 57402
Jan 21 23:30:42 compute-0 sshd-session[110754]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:30:42 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Jan 21 23:30:42 compute-0 systemd[1]: session-37.scope: Consumed 2.566s CPU time.
Jan 21 23:30:42 compute-0 systemd-logind[786]: Session 37 logged out. Waiting for processes to exit.
Jan 21 23:30:42 compute-0 systemd-logind[786]: Removed session 37.
Jan 21 23:30:42 compute-0 ceph-mon[74318]: pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:42.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:43 compute-0 ceph-mon[74318]: 5.6 scrub starts
Jan 21 23:30:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:44.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:44 compute-0 ceph-mon[74318]: 5.6 scrub ok
Jan 21 23:30:44 compute-0 ceph-mon[74318]: 9.b scrub starts
Jan 21 23:30:44 compute-0 ceph-mon[74318]: 9.b scrub ok
Jan 21 23:30:44 compute-0 ceph-mon[74318]: pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:44.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:45 compute-0 ceph-mon[74318]: 5.a scrub starts
Jan 21 23:30:45 compute-0 ceph-mon[74318]: 5.a scrub ok
Jan 21 23:30:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:30:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:46.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:30:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:46.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:47 compute-0 ceph-mon[74318]: 9.17 scrub starts
Jan 21 23:30:47 compute-0 ceph-mon[74318]: 9.17 scrub ok
Jan 21 23:30:47 compute-0 ceph-mon[74318]: pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:47 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 21 23:30:47 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 21 23:30:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:30:47 compute-0 sshd-session[111285]: Accepted publickey for zuul from 192.168.122.30 port 57418 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:30:47 compute-0 systemd-logind[786]: New session 38 of user zuul.
Jan 21 23:30:47 compute-0 systemd[1]: Started Session 38 of User zuul.
Jan 21 23:30:47 compute-0 sshd-session[111285]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:30:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:48 compute-0 ceph-mon[74318]: 6.e scrub starts
Jan 21 23:30:48 compute-0 ceph-mon[74318]: 6.e scrub ok
Jan 21 23:30:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 21 23:30:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:48.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 21 23:30:48 compute-0 python3.9[111438]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:30:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:48.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:49 compute-0 ceph-mon[74318]: pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:49 compute-0 ceph-mon[74318]: 5.c scrub starts
Jan 21 23:30:49 compute-0 ceph-mon[74318]: 5.c scrub ok
Jan 21 23:30:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:49 compute-0 python3.9[111593]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:30:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:30:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:50.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:30:50 compute-0 ceph-mon[74318]: 9.13 scrub starts
Jan 21 23:30:50 compute-0 ceph-mon[74318]: 9.13 scrub ok
Jan 21 23:30:50 compute-0 sudo[111747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxxtbwwzcczuzfsksfercbgtiejvknma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038250.3037055-80-99180733267586/AnsiballZ_setup.py'
Jan 21 23:30:50 compute-0 sudo[111747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:50 compute-0 python3.9[111749]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:30:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:30:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:50.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:30:51 compute-0 ceph-mon[74318]: pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:51 compute-0 sudo[111747]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:51 compute-0 sudo[111832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wijhwssgsylvvsfagcuypjfuuvrpuvmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038250.3037055-80-99180733267586/AnsiballZ_dnf.py'
Jan 21 23:30:51 compute-0 sudo[111832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:52 compute-0 python3.9[111834]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:30:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:52.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:30:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:53.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:53 compute-0 ceph-mon[74318]: pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:53 compute-0 sudo[111832]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:30:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:30:53 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 21 23:30:53 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 21 23:30:53 compute-0 sudo[111986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdmdgbnuibrwfculqdywjoidpxohmpht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038253.6357222-116-41062729763254/AnsiballZ_setup.py'
Jan 21 23:30:53 compute-0 sudo[111986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 21 23:30:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:54.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 21 23:30:54 compute-0 ceph-mon[74318]: 9.d scrub starts
Jan 21 23:30:54 compute-0 ceph-mon[74318]: 9.d scrub ok
Jan 21 23:30:54 compute-0 python3.9[111988]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:30:54 compute-0 sudo[111986]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:30:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:55.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:30:55 compute-0 ceph-mon[74318]: pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:55 compute-0 ceph-mon[74318]: 5.17 scrub starts
Jan 21 23:30:55 compute-0 ceph-mon[74318]: 5.17 scrub ok
Jan 21 23:30:55 compute-0 sudo[112109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:30:55 compute-0 sudo[112109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:55 compute-0 sudo[112109]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:55 compute-0 sudo[112134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:30:55 compute-0 sudo[112134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:30:55 compute-0 sudo[112134]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:55 compute-0 sudo[112232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkcpqohyihoyggjflmyqlkgppxwzkzqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038255.0836465-149-225476149720496/AnsiballZ_file.py'
Jan 21 23:30:55 compute-0 sudo[112232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:55 compute-0 python3.9[112234]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:30:55 compute-0 sudo[112232]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:30:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:56.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:30:56 compute-0 ceph-mon[74318]: pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:56 compute-0 sudo[112384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsmucigoaffrznlsrifosgkcveavarej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038256.0071552-173-6769022174133/AnsiballZ_command.py'
Jan 21 23:30:56 compute-0 sudo[112384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:56 compute-0 python3.9[112386]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:30:56 compute-0 sudo[112384]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:30:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:57.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:30:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:30:57 compute-0 ceph-mon[74318]: 5.1e scrub starts
Jan 21 23:30:57 compute-0 ceph-mon[74318]: 5.1e scrub ok
Jan 21 23:30:57 compute-0 ceph-mon[74318]: 5.14 scrub starts
Jan 21 23:30:57 compute-0 sudo[112550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioqkgfzvpcgiwfihjraawjwhkaptlktc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038257.0302284-197-281231086118574/AnsiballZ_stat.py'
Jan 21 23:30:57 compute-0 sudo[112550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:57 compute-0 python3.9[112552]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:30:57 compute-0 sudo[112550]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:57 compute-0 sudo[112628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huvyqnxpnxyenbsozrkbfxkxyyetfaca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038257.0302284-197-281231086118574/AnsiballZ_file.py'
Jan 21 23:30:57 compute-0 sudo[112628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:58 compute-0 python3.9[112630]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:30:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 21 23:30:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:30:58.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 21 23:30:58 compute-0 sudo[112628]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:58 compute-0 ceph-mon[74318]: 5.14 scrub ok
Jan 21 23:30:58 compute-0 ceph-mon[74318]: pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:30:58 compute-0 ceph-mon[74318]: 5.5 scrub starts
Jan 21 23:30:58 compute-0 sudo[112780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzqmbnfulxuldzbhhvvhodpdeuovrvoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038258.4238365-233-142544244324423/AnsiballZ_stat.py'
Jan 21 23:30:58 compute-0 sudo[112780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:58 compute-0 python3.9[112782]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:30:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:30:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:30:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:30:59.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:30:59 compute-0 sudo[112780]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:59 compute-0 sudo[112859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amrhidmtvbfrdnlcpchudcwopuqftthn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038258.4238365-233-142544244324423/AnsiballZ_file.py'
Jan 21 23:30:59 compute-0 sudo[112859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:30:59 compute-0 ceph-mon[74318]: 5.5 scrub ok
Jan 21 23:30:59 compute-0 python3.9[112861]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:30:59 compute-0 sudo[112859]: pam_unix(sudo:session): session closed for user root
Jan 21 23:30:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:00.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:00 compute-0 ceph-mon[74318]: 9.7 scrub starts
Jan 21 23:31:00 compute-0 ceph-mon[74318]: 9.7 scrub ok
Jan 21 23:31:00 compute-0 ceph-mon[74318]: pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:00 compute-0 ceph-mon[74318]: 5.1d scrub starts
Jan 21 23:31:00 compute-0 ceph-mon[74318]: 5.1d scrub ok
Jan 21 23:31:00 compute-0 sudo[113011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsytrhtbkopwjcwezvkvfefsvenizdaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038259.9108295-272-153301447832288/AnsiballZ_ini_file.py'
Jan 21 23:31:00 compute-0 sudo[113011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:00 compute-0 python3.9[113013]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:31:00 compute-0 sudo[113011]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:01.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:01 compute-0 sudo[113164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kklgodgzafswiredytaadrtadnckoxwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038260.8535264-272-250950497295704/AnsiballZ_ini_file.py'
Jan 21 23:31:01 compute-0 sudo[113164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:01 compute-0 python3.9[113166]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:31:01 compute-0 sudo[113164]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:01 compute-0 ceph-mon[74318]: 9.3 scrub starts
Jan 21 23:31:01 compute-0 ceph-mon[74318]: 9.3 scrub ok
Jan 21 23:31:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:02 compute-0 sudo[113316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxturzlyzhlkcumxqblkzgauwskrhzvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038261.4832907-272-160410317152807/AnsiballZ_ini_file.py'
Jan 21 23:31:02 compute-0 sudo[113316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:02 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 21 23:31:02 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 21 23:31:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:02.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:02 compute-0 python3.9[113318]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:31:02 compute-0 sudo[113316]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:31:02 compute-0 ceph-mon[74318]: pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:02 compute-0 sudo[113468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xskdjzebhjruzvnsexjfabfvrmohdjhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038262.3886712-272-144655770905377/AnsiballZ_ini_file.py'
Jan 21 23:31:02 compute-0 sudo[113468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:02 compute-0 python3.9[113470]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:31:02 compute-0 sudo[113468]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:03.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:03 compute-0 ceph-mon[74318]: 9.f scrub starts
Jan 21 23:31:03 compute-0 ceph-mon[74318]: 9.f scrub ok
Jan 21 23:31:03 compute-0 ceph-mon[74318]: 6.6 deep-scrub starts
Jan 21 23:31:03 compute-0 sudo[113621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbnsxfxmokpxpzmujrqyxacbgshxwrrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038263.30632-365-255773945154341/AnsiballZ_dnf.py'
Jan 21 23:31:03 compute-0 sudo[113621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:03 compute-0 python3.9[113623]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:31:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:31:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:04.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:31:04 compute-0 ceph-mon[74318]: 6.6 deep-scrub ok
Jan 21 23:31:04 compute-0 ceph-mon[74318]: pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:05.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:05 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 21 23:31:05 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 21 23:31:05 compute-0 sudo[113621]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:05 compute-0 ceph-mon[74318]: 9.10 scrub starts
Jan 21 23:31:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:06 compute-0 sudo[113775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmfcbgvrsqmydmxnhsghlidilbvfyodz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038265.818542-398-101916721110405/AnsiballZ_setup.py'
Jan 21 23:31:06 compute-0 sudo[113775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:06.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:06 compute-0 python3.9[113777]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:31:06 compute-0 ceph-mon[74318]: 9.10 scrub ok
Jan 21 23:31:06 compute-0 ceph-mon[74318]: pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:06 compute-0 sudo[113775]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:07.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:07 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 21 23:31:07 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 21 23:31:07 compute-0 sudo[113929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzbzfynpwvcgpcjkjgmpiwhndbftpikt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038266.7376988-422-38768507163563/AnsiballZ_stat.py'
Jan 21 23:31:07 compute-0 sudo[113929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:31:07 compute-0 python3.9[113931]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:31:07 compute-0 sudo[113933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:31:07 compute-0 sudo[113933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:07 compute-0 sudo[113933]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:07 compute-0 sudo[113929]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:07 compute-0 ceph-mon[74318]: 9.5 scrub starts
Jan 21 23:31:07 compute-0 ceph-mon[74318]: 9.5 scrub ok
Jan 21 23:31:07 compute-0 ceph-mon[74318]: 9.11 scrub starts
Jan 21 23:31:07 compute-0 ceph-mon[74318]: 9.11 scrub ok
Jan 21 23:31:07 compute-0 sudo[113958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:31:07 compute-0 sudo[113958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:07 compute-0 sudo[113958]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:07 compute-0 sudo[114007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:31:07 compute-0 sudo[114007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:07 compute-0 sudo[114007]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:07 compute-0 sudo[114032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:31:07 compute-0 sudo[114032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:08 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 21 23:31:08 compute-0 sudo[114201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwplejzpefqgasnwglmosjpwojufzifx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038267.709548-449-27185086970474/AnsiballZ_stat.py'
Jan 21 23:31:08 compute-0 sudo[114201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:08 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 21 23:31:08 compute-0 sudo[114032]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:08.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:08 compute-0 python3.9[114203]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:31:08 compute-0 sudo[114201]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:08 compute-0 ceph-mon[74318]: pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:08 compute-0 ceph-mon[74318]: 9.12 scrub starts
Jan 21 23:31:08 compute-0 ceph-mon[74318]: 9.12 scrub ok
Jan 21 23:31:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:31:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:09.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:31:09 compute-0 sudo[114365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zixtwzmelpzjdlpxtiobtypoxulrknuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038268.6685088-479-93249033668422/AnsiballZ_command.py'
Jan 21 23:31:09 compute-0 sudo[114365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:31:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:31:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:31:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:31:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:31:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:31:09 compute-0 python3.9[114367]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:31:09 compute-0 sudo[114365]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:31:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:31:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:31:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:31:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:10 compute-0 sudo[114519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmtfkbrupeijipdkrgzhqcikmxudupqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038269.6082723-509-160331986570529/AnsiballZ_service_facts.py'
Jan 21 23:31:10 compute-0 sudo[114519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 21 23:31:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:10.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 21 23:31:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:31:10 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:31:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:31:10 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:31:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:31:10 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:31:10 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev e065442b-b81f-48e1-b5c0-c225b12302b3 does not exist
Jan 21 23:31:10 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev c2def801-63b1-4dac-8ecf-90f3959d0bb8 does not exist
Jan 21 23:31:10 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 5885a80a-be9f-42a0-b7f4-0163bafde965 does not exist
Jan 21 23:31:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:31:10 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:31:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:31:10 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:31:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:31:10 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:31:10 compute-0 python3.9[114521]: ansible-service_facts Invoked
Jan 21 23:31:10 compute-0 sudo[114522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:31:10 compute-0 sudo[114522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:10 compute-0 sudo[114522]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:10 compute-0 network[114581]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 23:31:10 compute-0 network[114587]: 'network-scripts' will be removed from distribution in near future.
Jan 21 23:31:10 compute-0 network[114588]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 23:31:10 compute-0 sudo[114550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:31:10 compute-0 sudo[114550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:10 compute-0 sudo[114550]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:31:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:31:10 compute-0 ceph-mon[74318]: 9.18 scrub starts
Jan 21 23:31:10 compute-0 ceph-mon[74318]: 9.18 scrub ok
Jan 21 23:31:10 compute-0 ceph-mon[74318]: pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:31:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:31:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:31:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:31:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:31:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:31:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:11.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:11 compute-0 sudo[114596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:31:11 compute-0 sudo[114596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:11 compute-0 sudo[114596]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:11 compute-0 sudo[114622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:31:11 compute-0 sudo[114622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:11 compute-0 podman[114702]: 2026-01-21 23:31:11.507892161 +0000 UTC m=+0.048445279 container create a86c8ee35b1ec003361e69976c4b2b79101aefeec36f313877330ffcf1c62e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rubin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:31:11 compute-0 systemd[1]: Started libpod-conmon-a86c8ee35b1ec003361e69976c4b2b79101aefeec36f313877330ffcf1c62e02.scope.
Jan 21 23:31:11 compute-0 podman[114702]: 2026-01-21 23:31:11.485546491 +0000 UTC m=+0.026099609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:31:11 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:31:11 compute-0 podman[114702]: 2026-01-21 23:31:11.617412317 +0000 UTC m=+0.157965415 container init a86c8ee35b1ec003361e69976c4b2b79101aefeec36f313877330ffcf1c62e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Jan 21 23:31:11 compute-0 podman[114702]: 2026-01-21 23:31:11.626943835 +0000 UTC m=+0.167496923 container start a86c8ee35b1ec003361e69976c4b2b79101aefeec36f313877330ffcf1c62e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rubin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 21 23:31:11 compute-0 podman[114702]: 2026-01-21 23:31:11.630576789 +0000 UTC m=+0.171129917 container attach a86c8ee35b1ec003361e69976c4b2b79101aefeec36f313877330ffcf1c62e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 21 23:31:11 compute-0 systemd[1]: libpod-a86c8ee35b1ec003361e69976c4b2b79101aefeec36f313877330ffcf1c62e02.scope: Deactivated successfully.
Jan 21 23:31:11 compute-0 gracious_rubin[114721]: 167 167
Jan 21 23:31:11 compute-0 conmon[114721]: conmon a86c8ee35b1ec003361e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a86c8ee35b1ec003361e69976c4b2b79101aefeec36f313877330ffcf1c62e02.scope/container/memory.events
Jan 21 23:31:11 compute-0 podman[114702]: 2026-01-21 23:31:11.636097052 +0000 UTC m=+0.176650140 container died a86c8ee35b1ec003361e69976c4b2b79101aefeec36f313877330ffcf1c62e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rubin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 23:31:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b2e67afc763b01b971d31ea3a4b1546a82c647f9197c686a0cbbad028ca0cc2-merged.mount: Deactivated successfully.
Jan 21 23:31:11 compute-0 podman[114702]: 2026-01-21 23:31:11.677833017 +0000 UTC m=+0.218386095 container remove a86c8ee35b1ec003361e69976c4b2b79101aefeec36f313877330ffcf1c62e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:31:11 compute-0 systemd[1]: libpod-conmon-a86c8ee35b1ec003361e69976c4b2b79101aefeec36f313877330ffcf1c62e02.scope: Deactivated successfully.
Jan 21 23:31:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:11 compute-0 podman[114757]: 2026-01-21 23:31:11.877476284 +0000 UTC m=+0.055318168 container create c48b0219722a12afa74a190e2c5a8b7f3ea6e783d37a33d86442b97954ca6834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_spence, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 23:31:11 compute-0 systemd[1]: Started libpod-conmon-c48b0219722a12afa74a190e2c5a8b7f3ea6e783d37a33d86442b97954ca6834.scope.
Jan 21 23:31:11 compute-0 podman[114757]: 2026-01-21 23:31:11.852507035 +0000 UTC m=+0.030348949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:31:11 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3763b6edc5aaf3e68e389d0084ea4ad771895a2b4ef96b52d99e68a834522a62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3763b6edc5aaf3e68e389d0084ea4ad771895a2b4ef96b52d99e68a834522a62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3763b6edc5aaf3e68e389d0084ea4ad771895a2b4ef96b52d99e68a834522a62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3763b6edc5aaf3e68e389d0084ea4ad771895a2b4ef96b52d99e68a834522a62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3763b6edc5aaf3e68e389d0084ea4ad771895a2b4ef96b52d99e68a834522a62/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:31:11 compute-0 podman[114757]: 2026-01-21 23:31:11.987119353 +0000 UTC m=+0.164961337 container init c48b0219722a12afa74a190e2c5a8b7f3ea6e783d37a33d86442b97954ca6834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:31:12 compute-0 podman[114757]: 2026-01-21 23:31:12.003897139 +0000 UTC m=+0.181739013 container start c48b0219722a12afa74a190e2c5a8b7f3ea6e783d37a33d86442b97954ca6834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 23:31:12 compute-0 podman[114757]: 2026-01-21 23:31:12.007545884 +0000 UTC m=+0.185387798 container attach c48b0219722a12afa74a190e2c5a8b7f3ea6e783d37a33d86442b97954ca6834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_spence, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:31:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:31:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:12.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:31:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:31:12 compute-0 funny_spence[114777]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:31:12 compute-0 funny_spence[114777]: --> relative data size: 1.0
Jan 21 23:31:12 compute-0 funny_spence[114777]: --> All data devices are unavailable
Jan 21 23:31:12 compute-0 systemd[1]: libpod-c48b0219722a12afa74a190e2c5a8b7f3ea6e783d37a33d86442b97954ca6834.scope: Deactivated successfully.
Jan 21 23:31:12 compute-0 podman[114757]: 2026-01-21 23:31:12.919051796 +0000 UTC m=+1.096893750 container died c48b0219722a12afa74a190e2c5a8b7f3ea6e783d37a33d86442b97954ca6834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:31:12 compute-0 ceph-mon[74318]: 6.9 scrub starts
Jan 21 23:31:12 compute-0 ceph-mon[74318]: 6.9 scrub ok
Jan 21 23:31:12 compute-0 ceph-mon[74318]: 9.8 scrub starts
Jan 21 23:31:12 compute-0 ceph-mon[74318]: 9.8 scrub ok
Jan 21 23:31:12 compute-0 ceph-mon[74318]: pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3763b6edc5aaf3e68e389d0084ea4ad771895a2b4ef96b52d99e68a834522a62-merged.mount: Deactivated successfully.
Jan 21 23:31:12 compute-0 podman[114757]: 2026-01-21 23:31:12.987109364 +0000 UTC m=+1.164951248 container remove c48b0219722a12afa74a190e2c5a8b7f3ea6e783d37a33d86442b97954ca6834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_spence, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 21 23:31:12 compute-0 systemd[1]: libpod-conmon-c48b0219722a12afa74a190e2c5a8b7f3ea6e783d37a33d86442b97954ca6834.scope: Deactivated successfully.
Jan 21 23:31:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:13 compute-0 sudo[114622]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:13.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:13 compute-0 sudo[114832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:31:13 compute-0 sudo[114832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:13 compute-0 sudo[114832]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:13 compute-0 sudo[114857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:31:13 compute-0 sudo[114857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:13 compute-0 sudo[114857]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:13 compute-0 sudo[114883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:31:13 compute-0 sudo[114883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:13 compute-0 sudo[114883]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:13 compute-0 sudo[114908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:31:13 compute-0 sudo[114908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:13 compute-0 podman[114973]: 2026-01-21 23:31:13.820433175 +0000 UTC m=+0.065088642 container create f1c45854d251d84c6f40bb48424a102fea0242d956e0a46235407e64dbbdef5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ramanujan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 21 23:31:13 compute-0 systemd[1]: Started libpod-conmon-f1c45854d251d84c6f40bb48424a102fea0242d956e0a46235407e64dbbdef5b.scope.
Jan 21 23:31:13 compute-0 podman[114973]: 2026-01-21 23:31:13.797933981 +0000 UTC m=+0.042589478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:31:13 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:31:13 compute-0 podman[114973]: 2026-01-21 23:31:13.9229816 +0000 UTC m=+0.167637067 container init f1c45854d251d84c6f40bb48424a102fea0242d956e0a46235407e64dbbdef5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 21 23:31:13 compute-0 podman[114973]: 2026-01-21 23:31:13.931725697 +0000 UTC m=+0.176381154 container start f1c45854d251d84c6f40bb48424a102fea0242d956e0a46235407e64dbbdef5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 21 23:31:13 compute-0 podman[114973]: 2026-01-21 23:31:13.935546216 +0000 UTC m=+0.180201673 container attach f1c45854d251d84c6f40bb48424a102fea0242d956e0a46235407e64dbbdef5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ramanujan, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 23:31:13 compute-0 epic_ramanujan[114992]: 167 167
Jan 21 23:31:13 compute-0 podman[114973]: 2026-01-21 23:31:13.93798773 +0000 UTC m=+0.182643197 container died f1c45854d251d84c6f40bb48424a102fea0242d956e0a46235407e64dbbdef5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ramanujan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:31:13 compute-0 systemd[1]: libpod-f1c45854d251d84c6f40bb48424a102fea0242d956e0a46235407e64dbbdef5b.scope: Deactivated successfully.
Jan 21 23:31:13 compute-0 ceph-mon[74318]: 6.b scrub starts
Jan 21 23:31:13 compute-0 ceph-mon[74318]: 6.b scrub ok
Jan 21 23:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-78dc09d08bd7a1724dca1428e3e221fcc5a8e73fb69ea1686b92ccdefbe399b6-merged.mount: Deactivated successfully.
Jan 21 23:31:13 compute-0 podman[114973]: 2026-01-21 23:31:13.983356489 +0000 UTC m=+0.228011986 container remove f1c45854d251d84c6f40bb48424a102fea0242d956e0a46235407e64dbbdef5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 21 23:31:14 compute-0 systemd[1]: libpod-conmon-f1c45854d251d84c6f40bb48424a102fea0242d956e0a46235407e64dbbdef5b.scope: Deactivated successfully.
Jan 21 23:31:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:14.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:14 compute-0 podman[115025]: 2026-01-21 23:31:14.203777845 +0000 UTC m=+0.067389262 container create 80a100171eb00d84c2f815214e7efb37c7e5152962d07a8db7e6f6f30fe9bcd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_fermat, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:31:14 compute-0 podman[115025]: 2026-01-21 23:31:14.165388068 +0000 UTC m=+0.028999495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:31:14 compute-0 systemd[1]: Started libpod-conmon-80a100171eb00d84c2f815214e7efb37c7e5152962d07a8db7e6f6f30fe9bcd0.scope.
Jan 21 23:31:14 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1cd4f3d0e97fa8d65bff0921ef0b28e5191ae470f6f2499ff26b4feaf494d10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1cd4f3d0e97fa8d65bff0921ef0b28e5191ae470f6f2499ff26b4feaf494d10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1cd4f3d0e97fa8d65bff0921ef0b28e5191ae470f6f2499ff26b4feaf494d10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1cd4f3d0e97fa8d65bff0921ef0b28e5191ae470f6f2499ff26b4feaf494d10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:31:14 compute-0 podman[115025]: 2026-01-21 23:31:14.343735012 +0000 UTC m=+0.207346419 container init 80a100171eb00d84c2f815214e7efb37c7e5152962d07a8db7e6f6f30fe9bcd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 21 23:31:14 compute-0 podman[115025]: 2026-01-21 23:31:14.354485141 +0000 UTC m=+0.218096528 container start 80a100171eb00d84c2f815214e7efb37c7e5152962d07a8db7e6f6f30fe9bcd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:31:14 compute-0 podman[115025]: 2026-01-21 23:31:14.357282844 +0000 UTC m=+0.220894231 container attach 80a100171eb00d84c2f815214e7efb37c7e5152962d07a8db7e6f6f30fe9bcd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_fermat, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:31:14 compute-0 sudo[114519]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:14 compute-0 ceph-mon[74318]: 6.f scrub starts
Jan 21 23:31:14 compute-0 ceph-mon[74318]: 6.f scrub ok
Jan 21 23:31:14 compute-0 ceph-mon[74318]: pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:15.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:15 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 21 23:31:15 compute-0 ceph-osd[84656]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]: {
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:     "1": [
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:         {
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:             "devices": [
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:                 "/dev/loop3"
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:             ],
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:             "lv_name": "ceph_lv0",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:             "lv_size": "7511998464",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:             "name": "ceph_lv0",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:             "tags": {
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:                 "ceph.cluster_name": "ceph",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:                 "ceph.crush_device_class": "",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:                 "ceph.encrypted": "0",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:                 "ceph.osd_id": "1",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:                 "ceph.type": "block",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:                 "ceph.vdo": "0"
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:             },
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:             "type": "block",
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:             "vg_name": "ceph_vg0"
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:         }
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]:     ]
Jan 21 23:31:15 compute-0 wonderful_fermat[115047]: }
Jan 21 23:31:15 compute-0 systemd[1]: libpod-80a100171eb00d84c2f815214e7efb37c7e5152962d07a8db7e6f6f30fe9bcd0.scope: Deactivated successfully.
Jan 21 23:31:15 compute-0 podman[115025]: 2026-01-21 23:31:15.169005544 +0000 UTC m=+1.032616961 container died 80a100171eb00d84c2f815214e7efb37c7e5152962d07a8db7e6f6f30fe9bcd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_fermat, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 21 23:31:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1cd4f3d0e97fa8d65bff0921ef0b28e5191ae470f6f2499ff26b4feaf494d10-merged.mount: Deactivated successfully.
Jan 21 23:31:15 compute-0 podman[115025]: 2026-01-21 23:31:15.238452429 +0000 UTC m=+1.102063826 container remove 80a100171eb00d84c2f815214e7efb37c7e5152962d07a8db7e6f6f30fe9bcd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_fermat, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:31:15 compute-0 systemd[1]: libpod-conmon-80a100171eb00d84c2f815214e7efb37c7e5152962d07a8db7e6f6f30fe9bcd0.scope: Deactivated successfully.
Jan 21 23:31:15 compute-0 sudo[114908]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:15 compute-0 sudo[115112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:31:15 compute-0 sudo[115112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:15 compute-0 sudo[115112]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:15 compute-0 sudo[115143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:31:15 compute-0 sudo[115137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:31:15 compute-0 sudo[115143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:15 compute-0 sudo[115137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:15 compute-0 sudo[115143]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:15 compute-0 sudo[115137]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:15 compute-0 sudo[115188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:31:15 compute-0 sudo[115188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:15 compute-0 sudo[115187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:31:15 compute-0 sudo[115188]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:15 compute-0 sudo[115187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:15 compute-0 sudo[115187]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:15 compute-0 sudo[115237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:31:15 compute-0 sudo[115237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:15 compute-0 podman[115302]: 2026-01-21 23:31:15.919818221 +0000 UTC m=+0.059266611 container create 79f1388c53fa0151ea8ee602e6709b83d657c18c0c28d9576922f7793473603e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_rhodes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 21 23:31:15 compute-0 systemd[1]: Started libpod-conmon-79f1388c53fa0151ea8ee602e6709b83d657c18c0c28d9576922f7793473603e.scope.
Jan 21 23:31:15 compute-0 podman[115302]: 2026-01-21 23:31:15.89242753 +0000 UTC m=+0.031875940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:31:15 compute-0 ceph-mon[74318]: 9.9 scrub starts
Jan 21 23:31:15 compute-0 ceph-mon[74318]: 9.9 scrub ok
Jan 21 23:31:15 compute-0 ceph-mon[74318]: 9.15 scrub starts
Jan 21 23:31:15 compute-0 ceph-mon[74318]: 9.15 scrub ok
Jan 21 23:31:15 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:31:16 compute-0 podman[115302]: 2026-01-21 23:31:16.019132022 +0000 UTC m=+0.158580422 container init 79f1388c53fa0151ea8ee602e6709b83d657c18c0c28d9576922f7793473603e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_rhodes, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:31:16 compute-0 podman[115302]: 2026-01-21 23:31:16.030424595 +0000 UTC m=+0.169872985 container start 79f1388c53fa0151ea8ee602e6709b83d657c18c0c28d9576922f7793473603e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 21 23:31:16 compute-0 podman[115302]: 2026-01-21 23:31:16.034743157 +0000 UTC m=+0.174191517 container attach 79f1388c53fa0151ea8ee602e6709b83d657c18c0c28d9576922f7793473603e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_rhodes, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 21 23:31:16 compute-0 systemd[1]: libpod-79f1388c53fa0151ea8ee602e6709b83d657c18c0c28d9576922f7793473603e.scope: Deactivated successfully.
Jan 21 23:31:16 compute-0 heuristic_rhodes[115319]: 167 167
Jan 21 23:31:16 compute-0 conmon[115319]: conmon 79f1388c53fa0151ea8e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79f1388c53fa0151ea8ee602e6709b83d657c18c0c28d9576922f7793473603e.scope/container/memory.events
Jan 21 23:31:16 compute-0 podman[115302]: 2026-01-21 23:31:16.039369188 +0000 UTC m=+0.178817548 container died 79f1388c53fa0151ea8ee602e6709b83d657c18c0c28d9576922f7793473603e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:31:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4c9efe9fb614b7631cfe15b022e236520758d1825741b35ddd377950749a97b-merged.mount: Deactivated successfully.
Jan 21 23:31:16 compute-0 podman[115302]: 2026-01-21 23:31:16.08372586 +0000 UTC m=+0.223174250 container remove 79f1388c53fa0151ea8ee602e6709b83d657c18c0c28d9576922f7793473603e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 23:31:16 compute-0 systemd[1]: libpod-conmon-79f1388c53fa0151ea8ee602e6709b83d657c18c0c28d9576922f7793473603e.scope: Deactivated successfully.
Jan 21 23:31:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:16.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:16 compute-0 podman[115342]: 2026-01-21 23:31:16.297282318 +0000 UTC m=+0.061321274 container create dcb73e0f493019dc59f1d7dd266f811f34340e75ef2431390ff370e28f2d883e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:31:16 compute-0 systemd[1]: Started libpod-conmon-dcb73e0f493019dc59f1d7dd266f811f34340e75ef2431390ff370e28f2d883e.scope.
Jan 21 23:31:16 compute-0 podman[115342]: 2026-01-21 23:31:16.274705383 +0000 UTC m=+0.038744349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:31:16 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5191c5c7571edf1e044c8b029aa638abc855626700f20dca9270bc93b49dc6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5191c5c7571edf1e044c8b029aa638abc855626700f20dca9270bc93b49dc6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5191c5c7571edf1e044c8b029aa638abc855626700f20dca9270bc93b49dc6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5191c5c7571edf1e044c8b029aa638abc855626700f20dca9270bc93b49dc6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:31:16 compute-0 podman[115342]: 2026-01-21 23:31:16.395264194 +0000 UTC m=+0.159303140 container init dcb73e0f493019dc59f1d7dd266f811f34340e75ef2431390ff370e28f2d883e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 23:31:16 compute-0 podman[115342]: 2026-01-21 23:31:16.407756179 +0000 UTC m=+0.171795105 container start dcb73e0f493019dc59f1d7dd266f811f34340e75ef2431390ff370e28f2d883e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:31:16 compute-0 podman[115342]: 2026-01-21 23:31:16.411483486 +0000 UTC m=+0.175522442 container attach dcb73e0f493019dc59f1d7dd266f811f34340e75ef2431390ff370e28f2d883e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 21 23:31:16 compute-0 sudo[115511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmmpgkrinuldvmqgjzhbduudispjpdfd ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769038276.333646-554-178529810995196/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769038276.333646-554-178529810995196/args'
Jan 21 23:31:16 compute-0 sudo[115511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:16 compute-0 sudo[115511]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:17 compute-0 ceph-mon[74318]: 9.19 deep-scrub starts
Jan 21 23:31:17 compute-0 ceph-mon[74318]: 9.19 deep-scrub ok
Jan 21 23:31:17 compute-0 ceph-mon[74318]: pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:17.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:31:17 compute-0 trusting_carson[115381]: {
Jan 21 23:31:17 compute-0 trusting_carson[115381]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:31:17 compute-0 trusting_carson[115381]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:31:17 compute-0 trusting_carson[115381]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:31:17 compute-0 trusting_carson[115381]:         "osd_id": 1,
Jan 21 23:31:17 compute-0 trusting_carson[115381]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:31:17 compute-0 trusting_carson[115381]:         "type": "bluestore"
Jan 21 23:31:17 compute-0 trusting_carson[115381]:     }
Jan 21 23:31:17 compute-0 trusting_carson[115381]: }
Jan 21 23:31:17 compute-0 systemd[1]: libpod-dcb73e0f493019dc59f1d7dd266f811f34340e75ef2431390ff370e28f2d883e.scope: Deactivated successfully.
Jan 21 23:31:17 compute-0 podman[115342]: 2026-01-21 23:31:17.33803608 +0000 UTC m=+1.102075046 container died dcb73e0f493019dc59f1d7dd266f811f34340e75ef2431390ff370e28f2d883e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 23:31:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5191c5c7571edf1e044c8b029aa638abc855626700f20dca9270bc93b49dc6c-merged.mount: Deactivated successfully.
Jan 21 23:31:17 compute-0 podman[115342]: 2026-01-21 23:31:17.409817025 +0000 UTC m=+1.173855951 container remove dcb73e0f493019dc59f1d7dd266f811f34340e75ef2431390ff370e28f2d883e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 21 23:31:17 compute-0 systemd[1]: libpod-conmon-dcb73e0f493019dc59f1d7dd266f811f34340e75ef2431390ff370e28f2d883e.scope: Deactivated successfully.
Jan 21 23:31:17 compute-0 sudo[115237]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:31:17 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:31:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:31:17 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:31:17 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 1c9f06ad-4c24-4c7e-ac90-6af7c5b10694 does not exist
Jan 21 23:31:17 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev e51674d3-0f0e-4193-80f9-8918777cac05 does not exist
Jan 21 23:31:17 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 02b571a3-0e48-4ea8-b2c9-cc88d86085d1 does not exist
Jan 21 23:31:17 compute-0 sudo[115725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cycqvozwtrynfbdjmolaunhkuugospjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038277.2233598-587-130194368753691/AnsiballZ_dnf.py'
Jan 21 23:31:17 compute-0 sudo[115725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:17 compute-0 sudo[115688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:31:17 compute-0 sudo[115688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:17 compute-0 sudo[115688]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:17 compute-0 sudo[115735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:31:17 compute-0 sudo[115735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:17 compute-0 sudo[115735]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:17 compute-0 python3.9[115732]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:31:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:31:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:18.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:31:18 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:31:18 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:31:18 compute-0 ceph-mon[74318]: pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:19.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:19 compute-0 sudo[115725]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:19 compute-0 ceph-mon[74318]: 9.16 scrub starts
Jan 21 23:31:19 compute-0 ceph-mon[74318]: 9.16 scrub ok
Jan 21 23:31:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 21 23:31:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:20.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 21 23:31:20 compute-0 sudo[115911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgjovpbrvomsveqbdfiercddhgbjiats ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038279.6894116-626-203008408726448/AnsiballZ_package_facts.py'
Jan 21 23:31:20 compute-0 sudo[115911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:20 compute-0 ceph-mon[74318]: 9.1b scrub starts
Jan 21 23:31:20 compute-0 ceph-mon[74318]: 9.1b scrub ok
Jan 21 23:31:20 compute-0 ceph-mon[74318]: pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:20 compute-0 python3.9[115913]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 21 23:31:20 compute-0 sudo[115911]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:21.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:21 compute-0 ceph-mon[74318]: 9.1a scrub starts
Jan 21 23:31:21 compute-0 ceph-mon[74318]: 9.1a scrub ok
Jan 21 23:31:21 compute-0 ceph-mon[74318]: 9.1d scrub starts
Jan 21 23:31:21 compute-0 ceph-mon[74318]: 9.1d scrub ok
Jan 21 23:31:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:22 compute-0 sudo[116064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhgbxjbuupbrqqkzbietcksbrgkcjpds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038281.6274395-656-233010430835203/AnsiballZ_stat.py'
Jan 21 23:31:22 compute-0 sudo[116064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:22.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:22 compute-0 python3.9[116066]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:31:22 compute-0 sudo[116064]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:31:22 compute-0 sudo[116142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdnafhjvfqivxbeupuoxscfcthfmozlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038281.6274395-656-233010430835203/AnsiballZ_file.py'
Jan 21 23:31:22 compute-0 sudo[116142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:22 compute-0 ceph-mon[74318]: pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:22 compute-0 python3.9[116144]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:31:22 compute-0 sudo[116142]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:23.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:23 compute-0 sudo[116295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrniiiauaouaozmlnqbjbjbteqpwpbmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038282.9671793-692-240969351899769/AnsiballZ_stat.py'
Jan 21 23:31:23 compute-0 sudo[116295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:23 compute-0 ceph-mon[74318]: 9.1e scrub starts
Jan 21 23:31:23 compute-0 ceph-mon[74318]: 9.1e scrub ok
Jan 21 23:31:23 compute-0 python3.9[116297]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:31:23 compute-0 sudo[116295]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:23 compute-0 sudo[116373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjxgmmqfqgevryasnsgntxdqdygixwoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038282.9671793-692-240969351899769/AnsiballZ_file.py'
Jan 21 23:31:23 compute-0 sudo[116373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:24.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:24 compute-0 python3.9[116375]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:31:24 compute-0 sudo[116373]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:24 compute-0 ceph-mon[74318]: pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:25.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:25 compute-0 sudo[116526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luzsmnkejoryvcljbkxlkyfqpzzbdhtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038285.2021973-746-4766705035229/AnsiballZ_lineinfile.py'
Jan 21 23:31:25 compute-0 sudo[116526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:25 compute-0 python3.9[116528]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:31:25 compute-0 sudo[116526]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:26.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:26 compute-0 ceph-mon[74318]: pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:27.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:31:27 compute-0 sudo[116679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsekqnnvmbvlyxxsximdmzpzmrhnonnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038287.07814-791-254943778436436/AnsiballZ_setup.py'
Jan 21 23:31:27 compute-0 sudo[116679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:27 compute-0 python3.9[116681]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:31:27 compute-0 ceph-mon[74318]: 9.1f scrub starts
Jan 21 23:31:27 compute-0 ceph-mon[74318]: 9.1f scrub ok
Jan 21 23:31:28 compute-0 sudo[116679]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:31:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:28.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:31:28 compute-0 sudo[116763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptlbboxqrumteaxytqrizixiixbejuex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038287.07814-791-254943778436436/AnsiballZ_systemd.py'
Jan 21 23:31:28 compute-0 sudo[116763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:28 compute-0 ceph-mon[74318]: pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:28 compute-0 python3.9[116765]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:31:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:29.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:29 compute-0 sudo[116763]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:31:30.022710) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038290022757, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2385, "num_deletes": 251, "total_data_size": 3544957, "memory_usage": 3615792, "flush_reason": "Manual Compaction"}
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038290045116, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 3424988, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7612, "largest_seqno": 9996, "table_properties": {"data_size": 3414943, "index_size": 5899, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 26149, "raw_average_key_size": 21, "raw_value_size": 3392353, "raw_average_value_size": 2787, "num_data_blocks": 262, "num_entries": 1217, "num_filter_entries": 1217, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769038127, "oldest_key_time": 1769038127, "file_creation_time": 1769038290, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 22713 microseconds, and 7371 cpu microseconds.
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:31:30.045397) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 3424988 bytes OK
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:31:30.045482) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:31:30.063629) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:31:30.063681) EVENT_LOG_v1 {"time_micros": 1769038290063671, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:31:30.063704) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 3534587, prev total WAL file size 3534587, number of live WAL files 2.
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:31:30.064935) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(3344KB)], [20(7602KB)]
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038290065070, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 11209959, "oldest_snapshot_seqno": -1}
Jan 21 23:31:30 compute-0 sshd-session[111288]: Connection closed by 192.168.122.30 port 57418
Jan 21 23:31:30 compute-0 sshd-session[111285]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:31:30 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Jan 21 23:31:30 compute-0 systemd[1]: session-38.scope: Consumed 27.230s CPU time.
Jan 21 23:31:30 compute-0 systemd-logind[786]: Session 38 logged out. Waiting for processes to exit.
Jan 21 23:31:30 compute-0 systemd-logind[786]: Removed session 38.
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3865 keys, 9567854 bytes, temperature: kUnknown
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038290180276, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 9567854, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9536146, "index_size": 20974, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 93277, "raw_average_key_size": 24, "raw_value_size": 9460622, "raw_average_value_size": 2447, "num_data_blocks": 914, "num_entries": 3865, "num_filter_entries": 3865, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769038290, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:31:30.180718) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9567854 bytes
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:31:30.182502) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.2 rd, 83.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.4 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4388, records dropped: 523 output_compression: NoCompression
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:31:30.182541) EVENT_LOG_v1 {"time_micros": 1769038290182521, "job": 6, "event": "compaction_finished", "compaction_time_micros": 115301, "compaction_time_cpu_micros": 33150, "output_level": 6, "num_output_files": 1, "total_output_size": 9567854, "num_input_records": 4388, "num_output_records": 3865, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038290183945, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038290187087, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:31:30.064780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:31:30.187170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:31:30.187177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:31:30.187179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:31:30.187181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:31:30 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:31:30.187183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:31:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:30.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:30 compute-0 sshd-session[116793]: Invalid user ubuntu from 38.67.240.124 port 54217
Jan 21 23:31:30 compute-0 sshd-session[116793]: Received disconnect from 38.67.240.124 port 54217:11:  [preauth]
Jan 21 23:31:30 compute-0 sshd-session[116793]: Disconnected from invalid user ubuntu 38.67.240.124 port 54217 [preauth]
Jan 21 23:31:31 compute-0 ceph-mon[74318]: pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:31.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:32.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:31:33 compute-0 ceph-mon[74318]: pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:33.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:34.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:35 compute-0 ceph-mon[74318]: pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 21 23:31:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:35.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 21 23:31:35 compute-0 sudo[116798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:31:35 compute-0 sudo[116798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:35 compute-0 sudo[116798]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:35 compute-0 sudo[116823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:31:35 compute-0 sudo[116823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:35 compute-0 sudo[116823]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:35 compute-0 sshd-session[116848]: Accepted publickey for zuul from 192.168.122.30 port 50464 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:31:35 compute-0 systemd-logind[786]: New session 39 of user zuul.
Jan 21 23:31:36 compute-0 systemd[1]: Started Session 39 of User zuul.
Jan 21 23:31:36 compute-0 sshd-session[116848]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:31:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:31:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:36.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:31:36 compute-0 sudo[117001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcoiyjodpfrogvfzemrvpunitsdukjzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038296.118671-26-250868548683724/AnsiballZ_file.py'
Jan 21 23:31:36 compute-0 sudo[117001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:36 compute-0 python3.9[117003]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:31:36 compute-0 sudo[117001]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:31:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:37.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:31:37 compute-0 ceph-mon[74318]: pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:31:37 compute-0 sudo[117154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqdlspvfhenrizpverynumldewprhdyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038297.0694492-62-243758763463593/AnsiballZ_stat.py'
Jan 21 23:31:37 compute-0 sudo[117154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:37 compute-0 python3.9[117156]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:31:37 compute-0 sudo[117154]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:38 compute-0 sudo[117232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvgoiwjmsrnrgptbrjqajmcwpvixyjkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038297.0694492-62-243758763463593/AnsiballZ_file.py'
Jan 21 23:31:38 compute-0 sudo[117232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:38.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:38 compute-0 python3.9[117234]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:31:38 compute-0 sudo[117232]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:38 compute-0 sshd-session[116851]: Connection closed by 192.168.122.30 port 50464
Jan 21 23:31:38 compute-0 sshd-session[116848]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:31:38 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Jan 21 23:31:38 compute-0 systemd[1]: session-39.scope: Consumed 1.759s CPU time.
Jan 21 23:31:38 compute-0 systemd-logind[786]: Session 39 logged out. Waiting for processes to exit.
Jan 21 23:31:38 compute-0 systemd-logind[786]: Removed session 39.
Jan 21 23:31:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:39.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:39 compute-0 ceph-mon[74318]: pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:31:39
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['.mgr', 'vms', 'volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:31:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 21 23:31:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:40.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 21 23:31:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:41.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:41 compute-0 ceph-mon[74318]: pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:42.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:31:42 compute-0 ceph-mon[74318]: pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:43.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:44 compute-0 sshd-session[117262]: Accepted publickey for zuul from 192.168.122.30 port 59334 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:31:44 compute-0 systemd-logind[786]: New session 40 of user zuul.
Jan 21 23:31:44 compute-0 systemd[1]: Started Session 40 of User zuul.
Jan 21 23:31:44 compute-0 sshd-session[117262]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:31:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 21 23:31:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:44.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 21 23:31:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:45.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:45 compute-0 python3.9[117415]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:31:45 compute-0 ceph-mon[74318]: pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:46 compute-0 sudo[117570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddaioxqkkavmemzvuaecvudvccuvjzrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038305.7630782-59-104986406286269/AnsiballZ_file.py'
Jan 21 23:31:46 compute-0 sudo[117570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:46.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:46 compute-0 ceph-mon[74318]: pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:46 compute-0 python3.9[117572]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:31:46 compute-0 sudo[117570]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:47.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:47 compute-0 sudo[117746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxycyohlwjbgocgcnruguukrihppeery ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038306.6446474-83-274402444691122/AnsiballZ_stat.py'
Jan 21 23:31:47 compute-0 sudo[117746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:31:47 compute-0 python3.9[117748]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:31:47 compute-0 sudo[117746]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:47 compute-0 sudo[117824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsijegmeuhlwrdkwinrqwerkxmhcprqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038306.6446474-83-274402444691122/AnsiballZ_file.py'
Jan 21 23:31:47 compute-0 sudo[117824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:47 compute-0 python3.9[117826]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.ymykklp2 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:31:47 compute-0 sudo[117824]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:48.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:48 compute-0 ceph-mon[74318]: pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:48 compute-0 sudo[117976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcbmiasppxiivconcgmtwqtfmomudxeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038308.551687-143-82383487383503/AnsiballZ_stat.py'
Jan 21 23:31:48 compute-0 sudo[117976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:31:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:49.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:31:49 compute-0 python3.9[117978]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:31:49 compute-0 sudo[117976]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:49 compute-0 sudo[118055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umuafhukgyrkustgdykehxheeycapemk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038308.551687-143-82383487383503/AnsiballZ_file.py'
Jan 21 23:31:49 compute-0 sudo[118055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:49 compute-0 python3.9[118057]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.2hx211bb recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:31:49 compute-0 sudo[118055]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:50 compute-0 sudo[118207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwmlmkmtdgpppwkkffpqddmjbczrvdli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038309.8818102-182-260006374504844/AnsiballZ_file.py'
Jan 21 23:31:50 compute-0 sudo[118207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:50.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:50 compute-0 python3.9[118209]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:31:50 compute-0 sudo[118207]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:50 compute-0 ceph-mon[74318]: pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:50 compute-0 sudo[118359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emgzidzsuhbnrdoxmqhqairsylaycbqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038310.7005744-206-272290228792529/AnsiballZ_stat.py'
Jan 21 23:31:50 compute-0 sudo[118359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:51.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:51 compute-0 python3.9[118361]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:31:51 compute-0 sudo[118359]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:51 compute-0 sudo[118438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgtqxbfjjhhklndbklnllkcivnizdpwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038310.7005744-206-272290228792529/AnsiballZ_file.py'
Jan 21 23:31:51 compute-0 sudo[118438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:51 compute-0 python3.9[118440]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:31:51 compute-0 sudo[118438]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:52 compute-0 sudo[118590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taskyontukywkebkofoknetpdoxuadcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038311.8302615-206-265614008602267/AnsiballZ_stat.py'
Jan 21 23:31:52 compute-0 sudo[118590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:31:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:52.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:31:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:31:52 compute-0 python3.9[118592]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:31:52 compute-0 sudo[118590]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:52 compute-0 sudo[118668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jklycwmyyniqntszrbmspejypdsaudre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038311.8302615-206-265614008602267/AnsiballZ_file.py'
Jan 21 23:31:52 compute-0 sudo[118668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:52 compute-0 python3.9[118670]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:31:52 compute-0 sudo[118668]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:52 compute-0 ceph-mon[74318]: pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:53.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:53 compute-0 sudo[118821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etojphlwkawyalsjnhdeeqfgffnxyfkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038313.2664354-275-168746031858466/AnsiballZ_file.py'
Jan 21 23:31:53 compute-0 sudo[118821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:53 compute-0 python3.9[118823]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:31:53 compute-0 sudo[118821]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:31:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:31:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:54.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:54 compute-0 sudo[118973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovhkdcdmzelajfopyfptjymxlcnoobjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038313.9881334-299-88555036972656/AnsiballZ_stat.py'
Jan 21 23:31:54 compute-0 sudo[118973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:54 compute-0 python3.9[118975]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:31:54 compute-0 sudo[118973]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:54 compute-0 sudo[119051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjhfwunsashaicwlrqwduxktstnglade ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038313.9881334-299-88555036972656/AnsiballZ_file.py'
Jan 21 23:31:54 compute-0 sudo[119051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:54 compute-0 ceph-mon[74318]: pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:55 compute-0 python3.9[119053]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:31:55 compute-0 sudo[119051]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:31:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:55.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:31:55 compute-0 sudo[119204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwykktkhrftngwntmqppkqccmbpuhngs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038315.363186-335-162276416509251/AnsiballZ_stat.py'
Jan 21 23:31:55 compute-0 sudo[119204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:55 compute-0 sudo[119207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:31:55 compute-0 sudo[119207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:55 compute-0 sudo[119207]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:55 compute-0 sudo[119232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:31:55 compute-0 sudo[119232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:31:55 compute-0 sudo[119232]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:55 compute-0 python3.9[119206]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:31:55 compute-0 sudo[119204]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:56 compute-0 sudo[119332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aygibzmnscxduqecxxnumpzyyztzomds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038315.363186-335-162276416509251/AnsiballZ_file.py'
Jan 21 23:31:56 compute-0 sudo[119332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:31:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:56.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:31:56 compute-0 python3.9[119334]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:31:56 compute-0 sudo[119332]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:57 compute-0 ceph-mon[74318]: pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:31:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:57.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:31:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:31:57 compute-0 sudo[119485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncddmwngnbmslbqvxwyvoztiyebgqfik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038316.6523438-371-44647634114724/AnsiballZ_systemd.py'
Jan 21 23:31:57 compute-0 sudo[119485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:57 compute-0 python3.9[119487]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:31:57 compute-0 systemd[1]: Reloading.
Jan 21 23:31:57 compute-0 systemd-rc-local-generator[119515]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:31:57 compute-0 systemd-sysv-generator[119519]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:31:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:57 compute-0 sudo[119485]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:31:58.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:58 compute-0 sudo[119674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtmrpfuhdrsbnpqcwejpcqolzzdwrubh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038318.253355-395-231117264531027/AnsiballZ_stat.py'
Jan 21 23:31:58 compute-0 sudo[119674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:58 compute-0 python3.9[119676]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:31:58 compute-0 sudo[119674]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:59 compute-0 ceph-mon[74318]: pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:59 compute-0 sudo[119752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziqdulngvffaifbpmzdbrdtklcmpznrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038318.253355-395-231117264531027/AnsiballZ_file.py'
Jan 21 23:31:59 compute-0 sudo[119752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:31:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:31:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:31:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:31:59.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:31:59 compute-0 python3.9[119754]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:31:59 compute-0 sudo[119752]: pam_unix(sudo:session): session closed for user root
Jan 21 23:31:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:31:59 compute-0 sudo[119905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjmgcecrzjvoccveaebgtvrqghccpzds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038319.653149-431-152162150955561/AnsiballZ_stat.py'
Jan 21 23:31:59 compute-0 sudo[119905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:00 compute-0 python3.9[119907]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:32:00 compute-0 sudo[119905]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:00.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:00 compute-0 sudo[119983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tztrcqiijxkldziynlixhfpnipwvjzpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038319.653149-431-152162150955561/AnsiballZ_file.py'
Jan 21 23:32:00 compute-0 sudo[119983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:00 compute-0 python3.9[119985]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:00 compute-0 sudo[119983]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:32:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:01.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:32:01 compute-0 ceph-mon[74318]: pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:01 compute-0 sudo[120136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjocjacjcgkbdyhhvpyksifrznuavanf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038320.977629-467-71692783077264/AnsiballZ_systemd.py'
Jan 21 23:32:01 compute-0 sudo[120136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:01 compute-0 python3.9[120138]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:32:01 compute-0 systemd[1]: Reloading.
Jan 21 23:32:01 compute-0 systemd-rc-local-generator[120167]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:32:01 compute-0 systemd-sysv-generator[120170]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:32:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:01 compute-0 systemd[1]: Starting Create netns directory...
Jan 21 23:32:01 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 21 23:32:01 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 21 23:32:01 compute-0 systemd[1]: Finished Create netns directory.
Jan 21 23:32:01 compute-0 sudo[120136]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:32:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:02.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:32:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:32:02 compute-0 python3.9[120329]: ansible-ansible.builtin.service_facts Invoked
Jan 21 23:32:03 compute-0 network[120346]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 23:32:03 compute-0 network[120347]: 'network-scripts' will be removed from distribution in near future.
Jan 21 23:32:03 compute-0 network[120348]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 23:32:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:32:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:03.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:32:03 compute-0 ceph-mon[74318]: pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:04.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:05.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:05 compute-0 ceph-mon[74318]: pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:32:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:06.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:32:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:32:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:07.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:32:07 compute-0 ceph-mon[74318]: pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:32:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:32:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:08.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:32:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:09.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:32:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:32:09 compute-0 ceph-mon[74318]: pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:32:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:32:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:32:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:32:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:10.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:11 compute-0 sudo[120612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgvamovdsdjnipihjcjajefdwoutqhml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038330.722564-545-48387492339819/AnsiballZ_stat.py'
Jan 21 23:32:11 compute-0 sudo[120612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:11.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:11 compute-0 python3.9[120614]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:32:11 compute-0 ceph-mon[74318]: pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:11 compute-0 sudo[120612]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:11 compute-0 sudo[120691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whmcalfvkgmqgdblhffmgdawndrolsow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038330.722564-545-48387492339819/AnsiballZ_file.py'
Jan 21 23:32:11 compute-0 sudo[120691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:11 compute-0 python3.9[120693]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:11 compute-0 sudo[120691]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:32:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:12.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:32:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:32:12 compute-0 sudo[120843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzwpgzgtyqtrjdlnsnaunerlnaomxlbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038332.1807723-584-74248469768717/AnsiballZ_file.py'
Jan 21 23:32:12 compute-0 sudo[120843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:12 compute-0 python3.9[120845]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:12 compute-0 sudo[120843]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:13.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:13 compute-0 ceph-mon[74318]: pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:13 compute-0 sudo[120996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taujfwmxzwqysiphlgwzprdsszbdkhqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038332.985773-608-160564410189925/AnsiballZ_stat.py'
Jan 21 23:32:13 compute-0 sudo[120996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:13 compute-0 python3.9[120998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:32:13 compute-0 sudo[120996]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:13 compute-0 sudo[121074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldkahnbujykjubefsgbzqbupqhusgjwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038332.985773-608-160564410189925/AnsiballZ_file.py'
Jan 21 23:32:13 compute-0 sudo[121074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:14 compute-0 python3.9[121076]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:14 compute-0 sudo[121074]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:32:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:14.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:32:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:15.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:15 compute-0 sudo[121227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyeknkcejvznhydwrvzqdhxpilnzireo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038334.7598293-653-193408201208947/AnsiballZ_timezone.py'
Jan 21 23:32:15 compute-0 sudo[121227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:15 compute-0 ceph-mon[74318]: pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:15 compute-0 python3.9[121229]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 21 23:32:15 compute-0 systemd[1]: Starting Time & Date Service...
Jan 21 23:32:15 compute-0 systemd[1]: Started Time & Date Service.
Jan 21 23:32:15 compute-0 sudo[121227]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:15 compute-0 sudo[121258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:32:15 compute-0 sudo[121258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:15 compute-0 sudo[121258]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:16 compute-0 sudo[121306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:32:16 compute-0 sudo[121306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:16 compute-0 sudo[121306]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:16 compute-0 sudo[121433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umrpsnjualiyvtadtfdzbwwyjgnvwecg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038335.9823546-680-215251959501125/AnsiballZ_file.py'
Jan 21 23:32:16 compute-0 sudo[121433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:16.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:16 compute-0 ceph-mon[74318]: pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:16 compute-0 python3.9[121435]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:16 compute-0 sudo[121433]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:17 compute-0 sudo[121585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvxexmqxinjefjqewsesxhiyppfkvuud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038336.6993554-704-268567092321859/AnsiballZ_stat.py'
Jan 21 23:32:17 compute-0 sudo[121585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:17.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:17 compute-0 python3.9[121587]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:32:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:32:17 compute-0 sudo[121585]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:17 compute-0 sudo[121664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpualmisoiapiwldsoqvoncepoarcqmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038336.6993554-704-268567092321859/AnsiballZ_file.py'
Jan 21 23:32:17 compute-0 sudo[121664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:17 compute-0 python3.9[121666]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:17 compute-0 sudo[121664]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:18 compute-0 sudo[121691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:32:18 compute-0 sudo[121691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:18 compute-0 sudo[121691]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:18 compute-0 sudo[121717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:32:18 compute-0 sudo[121717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:18 compute-0 sudo[121717]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:18 compute-0 sudo[121772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:32:18 compute-0 sudo[121772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:18 compute-0 sudo[121772]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:18.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:18 compute-0 sudo[121825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:32:18 compute-0 sudo[121825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:18 compute-0 sudo[121918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibnwuhteyobnblhvfccgwilwreckfraj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038338.1002054-740-137978960372654/AnsiballZ_stat.py'
Jan 21 23:32:18 compute-0 sudo[121918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:18 compute-0 python3.9[121925]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:32:18 compute-0 sudo[121918]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:18 compute-0 sudo[121825]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:18 compute-0 sudo[122024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpxfpvfgonjyyksxxhgmlmybdhzcyulb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038338.1002054-740-137978960372654/AnsiballZ_file.py'
Jan 21 23:32:18 compute-0 sudo[122024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:18 compute-0 ceph-mon[74318]: pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:19 compute-0 python3.9[122026]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.k4q83hzo recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:19 compute-0 sudo[122024]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:19.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:19 compute-0 sudo[122177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beiomtxgyjdsyijippnnajzlpyqtgisb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038339.307546-776-202889401913996/AnsiballZ_stat.py'
Jan 21 23:32:19 compute-0 sudo[122177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:32:20 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:32:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:32:20 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:32:20 compute-0 python3.9[122179]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:32:20 compute-0 sudo[122177]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:20.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:20 compute-0 sudo[122255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lllyjknhxfilffnujxpvmjsllemqqghp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038339.307546-776-202889401913996/AnsiballZ_file.py'
Jan 21 23:32:20 compute-0 sudo[122255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:20 compute-0 python3.9[122257]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:20 compute-0 sudo[122255]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:32:20 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:32:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:32:20 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:32:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:32:20 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:32:20 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev c13380eb-2614-4cc5-9b86-a6b5c41b6300 does not exist
Jan 21 23:32:20 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 8ae6d2a5-6f79-4c20-bea8-8e657c5789f4 does not exist
Jan 21 23:32:20 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 33b464d6-9f00-4de9-9a22-2571f7e054f2 does not exist
Jan 21 23:32:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:32:20 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:32:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:32:20 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:32:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:32:20 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:32:20 compute-0 sudo[122324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:32:20 compute-0 sudo[122324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:20 compute-0 sudo[122324]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:21 compute-0 sudo[122359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:32:21 compute-0 sudo[122359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:21 compute-0 sudo[122359]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:21 compute-0 ceph-mon[74318]: pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:32:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:32:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:32:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:32:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:32:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:32:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:32:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:32:21 compute-0 sudo[122384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:32:21 compute-0 sudo[122384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:21 compute-0 sudo[122384]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:21.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:21 compute-0 sudo[122409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:32:21 compute-0 sudo[122409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:21 compute-0 sudo[122515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwzgjcyzqqkfigacfpbdpaslbyvzzirc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038340.8837547-815-208370803435446/AnsiballZ_command.py'
Jan 21 23:32:21 compute-0 sudo[122515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:21 compute-0 python3.9[122525]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:32:21 compute-0 podman[122552]: 2026-01-21 23:32:21.540045631 +0000 UTC m=+0.055659381 container create d8cc242c172e1de0a8b77c3a2a3ec2eff9cf8cae8b43fd2a3234cb6468369199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 21 23:32:21 compute-0 sudo[122515]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:21 compute-0 systemd[1]: Started libpod-conmon-d8cc242c172e1de0a8b77c3a2a3ec2eff9cf8cae8b43fd2a3234cb6468369199.scope.
Jan 21 23:32:21 compute-0 podman[122552]: 2026-01-21 23:32:21.513897309 +0000 UTC m=+0.029511129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:32:21 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:32:21 compute-0 podman[122552]: 2026-01-21 23:32:21.640803912 +0000 UTC m=+0.156417702 container init d8cc242c172e1de0a8b77c3a2a3ec2eff9cf8cae8b43fd2a3234cb6468369199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_knuth, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 23:32:21 compute-0 podman[122552]: 2026-01-21 23:32:21.65381255 +0000 UTC m=+0.169426290 container start d8cc242c172e1de0a8b77c3a2a3ec2eff9cf8cae8b43fd2a3234cb6468369199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:32:21 compute-0 podman[122552]: 2026-01-21 23:32:21.657000993 +0000 UTC m=+0.172614743 container attach d8cc242c172e1de0a8b77c3a2a3ec2eff9cf8cae8b43fd2a3234cb6468369199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 21 23:32:21 compute-0 upbeat_knuth[122572]: 167 167
Jan 21 23:32:21 compute-0 systemd[1]: libpod-d8cc242c172e1de0a8b77c3a2a3ec2eff9cf8cae8b43fd2a3234cb6468369199.scope: Deactivated successfully.
Jan 21 23:32:21 compute-0 podman[122552]: 2026-01-21 23:32:21.662274842 +0000 UTC m=+0.177888622 container died d8cc242c172e1de0a8b77c3a2a3ec2eff9cf8cae8b43fd2a3234cb6468369199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:32:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccfdcbc3fe55fafed70cc13f92028139887aae5db920bfe35823350c3e0d9898-merged.mount: Deactivated successfully.
Jan 21 23:32:21 compute-0 podman[122552]: 2026-01-21 23:32:21.716275869 +0000 UTC m=+0.231889619 container remove d8cc242c172e1de0a8b77c3a2a3ec2eff9cf8cae8b43fd2a3234cb6468369199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:32:21 compute-0 systemd[1]: libpod-conmon-d8cc242c172e1de0a8b77c3a2a3ec2eff9cf8cae8b43fd2a3234cb6468369199.scope: Deactivated successfully.
Jan 21 23:32:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:21 compute-0 podman[122671]: 2026-01-21 23:32:21.877445113 +0000 UTC m=+0.040354578 container create 72bf7349e26bd3a96fbfa008bd768836c61ae99e987e465f53f85d923321b5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:32:21 compute-0 systemd[1]: Started libpod-conmon-72bf7349e26bd3a96fbfa008bd768836c61ae99e987e465f53f85d923321b5a0.scope.
Jan 21 23:32:21 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f80e67e5afac7126b95f63653e080dbff70efc5a4dbc3c28404994d7abb9f743/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f80e67e5afac7126b95f63653e080dbff70efc5a4dbc3c28404994d7abb9f743/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f80e67e5afac7126b95f63653e080dbff70efc5a4dbc3c28404994d7abb9f743/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:32:21 compute-0 podman[122671]: 2026-01-21 23:32:21.862057539 +0000 UTC m=+0.024967004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f80e67e5afac7126b95f63653e080dbff70efc5a4dbc3c28404994d7abb9f743/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f80e67e5afac7126b95f63653e080dbff70efc5a4dbc3c28404994d7abb9f743/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:32:21 compute-0 podman[122671]: 2026-01-21 23:32:21.969246137 +0000 UTC m=+0.132155662 container init 72bf7349e26bd3a96fbfa008bd768836c61ae99e987e465f53f85d923321b5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:32:21 compute-0 podman[122671]: 2026-01-21 23:32:21.98211528 +0000 UTC m=+0.145024735 container start 72bf7349e26bd3a96fbfa008bd768836c61ae99e987e465f53f85d923321b5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:32:21 compute-0 podman[122671]: 2026-01-21 23:32:21.985620353 +0000 UTC m=+0.148529848 container attach 72bf7349e26bd3a96fbfa008bd768836c61ae99e987e465f53f85d923321b5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:32:22 compute-0 sudo[122765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpnwuwpmgyodthaybmtgpawykfuozubf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769038341.7733605-839-2960027240278/AnsiballZ_edpm_nftables_from_files.py'
Jan 21 23:32:22 compute-0 sudo[122765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:22.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:32:22 compute-0 python3[122767]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 21 23:32:22 compute-0 sudo[122765]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:22 compute-0 bold_curran[122687]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:32:22 compute-0 bold_curran[122687]: --> relative data size: 1.0
Jan 21 23:32:22 compute-0 bold_curran[122687]: --> All data devices are unavailable
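
The three bold_curran lines above are ceph-volume report output from cephadm's OSD-apply pass: one LVM data device was offered, none physical, and "All data devices are unavailable" means the offered LV is already consumed (the lvm list output further down shows /dev/ceph_vg0/ceph_lv0 backing osd.1), so no new OSD is created. A minimal sketch of how the same availability question could be asked directly, reusing the cephadm wrapper path and fsid visible in this log; the use of `ceph-volume inventory` here is an illustrative assumption, not the exact call cephadm made:

    import json
    import subprocess

    # Wrapper script and fsid copied verbatim from the surrounding log lines.
    CEPHADM = "/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d"
    FSID = "3759241a-7f1c-520d-ba17-879943ee2f00"

    # Ask ceph-volume which devices are still available for new OSDs.
    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for dev in json.loads(out):
        state = "available" if dev["available"] else "unavailable"
        print(dev["path"], state, dev.get("rejected_reasons", []))
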
Jan 21 23:32:22 compute-0 systemd[1]: libpod-72bf7349e26bd3a96fbfa008bd768836c61ae99e987e465f53f85d923321b5a0.scope: Deactivated successfully.
Jan 21 23:32:22 compute-0 podman[122671]: 2026-01-21 23:32:22.865232627 +0000 UTC m=+1.028142082 container died 72bf7349e26bd3a96fbfa008bd768836c61ae99e987e465f53f85d923321b5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:32:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f80e67e5afac7126b95f63653e080dbff70efc5a4dbc3c28404994d7abb9f743-merged.mount: Deactivated successfully.
Jan 21 23:32:22 compute-0 podman[122671]: 2026-01-21 23:32:22.924241545 +0000 UTC m=+1.087150990 container remove 72bf7349e26bd3a96fbfa008bd768836c61ae99e987e465f53f85d923321b5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 21 23:32:22 compute-0 systemd[1]: libpod-conmon-72bf7349e26bd3a96fbfa008bd768836c61ae99e987e465f53f85d923321b5a0.scope: Deactivated successfully.
Jan 21 23:32:22 compute-0 sudo[122409]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:23 compute-0 sudo[122953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnmdkzakiiymaulgyvyhorzzcrdlgrtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038342.6746953-863-47110718331467/AnsiballZ_stat.py'
Jan 21 23:32:23 compute-0 sudo[122953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:23 compute-0 sudo[122925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:32:23 compute-0 sudo[122925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:23 compute-0 sudo[122925]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:23 compute-0 ceph-mon[74318]: pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:23 compute-0 sudo[122968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:32:23 compute-0 sudo[122968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:23 compute-0 sudo[122968]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:23 compute-0 sudo[122993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:32:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:32:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:23.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:32:23 compute-0 sudo[122993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:23 compute-0 sudo[122993]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:23 compute-0 sudo[123019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:32:23 compute-0 sudo[123019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:23 compute-0 python3.9[122965]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:32:23 compute-0 sudo[122953]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:23 compute-0 podman[123135]: 2026-01-21 23:32:23.64346775 +0000 UTC m=+0.050780324 container create a697b7f6e23822f8156abafb572d890f8ceb259d065c802742e29a9c8c14f179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 23:32:23 compute-0 sudo[123172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihwsdocjutuivvdbznylvehmipgoqxcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038342.6746953-863-47110718331467/AnsiballZ_file.py'
Jan 21 23:32:23 compute-0 sudo[123172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:23 compute-0 systemd[1]: Started libpod-conmon-a697b7f6e23822f8156abafb572d890f8ceb259d065c802742e29a9c8c14f179.scope.
Jan 21 23:32:23 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:32:23 compute-0 podman[123135]: 2026-01-21 23:32:23.71654308 +0000 UTC m=+0.123855684 container init a697b7f6e23822f8156abafb572d890f8ceb259d065c802742e29a9c8c14f179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_stonebraker, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:32:23 compute-0 podman[123135]: 2026-01-21 23:32:23.628464337 +0000 UTC m=+0.035776941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:32:23 compute-0 podman[123135]: 2026-01-21 23:32:23.723280397 +0000 UTC m=+0.130592971 container start a697b7f6e23822f8156abafb572d890f8ceb259d065c802742e29a9c8c14f179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_stonebraker, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 23:32:23 compute-0 eloquent_stonebraker[123181]: 167 167
Jan 21 23:32:23 compute-0 systemd[1]: libpod-a697b7f6e23822f8156abafb572d890f8ceb259d065c802742e29a9c8c14f179.scope: Deactivated successfully.
Jan 21 23:32:23 compute-0 podman[123135]: 2026-01-21 23:32:23.733411253 +0000 UTC m=+0.140723847 container attach a697b7f6e23822f8156abafb572d890f8ceb259d065c802742e29a9c8c14f179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 23:32:23 compute-0 podman[123135]: 2026-01-21 23:32:23.733875248 +0000 UTC m=+0.141187822 container died a697b7f6e23822f8156abafb572d890f8ceb259d065c802742e29a9c8c14f179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_stonebraker, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:32:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8bfbfb554332f4c1ca20439cec428d28039639f946b92db7f2c74f528f71f05-merged.mount: Deactivated successfully.
Jan 21 23:32:23 compute-0 podman[123135]: 2026-01-21 23:32:23.780013772 +0000 UTC m=+0.187326356 container remove a697b7f6e23822f8156abafb572d890f8ceb259d065c802742e29a9c8c14f179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:32:23 compute-0 systemd[1]: libpod-conmon-a697b7f6e23822f8156abafb572d890f8ceb259d065c802742e29a9c8c14f179.scope: Deactivated successfully.
Jan 21 23:32:23 compute-0 python3.9[123178]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:23 compute-0 sudo[123172]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:23 compute-0 podman[123223]: 2026-01-21 23:32:23.937917881 +0000 UTC m=+0.041660581 container create 5ce664b7dbe3a1acea18d9cf495d610d4270baac3363e6b763f4bfd5be77c464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ramanujan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:32:23 compute-0 systemd[1]: Started libpod-conmon-5ce664b7dbe3a1acea18d9cf495d610d4270baac3363e6b763f4bfd5be77c464.scope.
Jan 21 23:32:24 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85266637e4c496ef8dc15b1e553980b0eb01af8e352ca915554d17eec65a2b13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85266637e4c496ef8dc15b1e553980b0eb01af8e352ca915554d17eec65a2b13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85266637e4c496ef8dc15b1e553980b0eb01af8e352ca915554d17eec65a2b13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85266637e4c496ef8dc15b1e553980b0eb01af8e352ca915554d17eec65a2b13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:32:24 compute-0 podman[123223]: 2026-01-21 23:32:23.919695055 +0000 UTC m=+0.023437775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:32:24 compute-0 podman[123223]: 2026-01-21 23:32:24.028800405 +0000 UTC m=+0.132543105 container init 5ce664b7dbe3a1acea18d9cf495d610d4270baac3363e6b763f4bfd5be77c464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 23:32:24 compute-0 podman[123223]: 2026-01-21 23:32:24.038224298 +0000 UTC m=+0.141966988 container start 5ce664b7dbe3a1acea18d9cf495d610d4270baac3363e6b763f4bfd5be77c464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ramanujan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:32:24 compute-0 podman[123223]: 2026-01-21 23:32:24.040857963 +0000 UTC m=+0.144600653 container attach 5ce664b7dbe3a1acea18d9cf495d610d4270baac3363e6b763f4bfd5be77c464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 21 23:32:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:24.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:24 compute-0 sudo[123375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhntxmdtlkzsekulbzoxuhxwhojaqjwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038344.1155155-899-56913930246122/AnsiballZ_stat.py'
Jan 21 23:32:24 compute-0 sudo[123375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:24 compute-0 python3.9[123377]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:32:24 compute-0 sudo[123375]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]: {
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:     "1": [
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:         {
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:             "devices": [
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:                 "/dev/loop3"
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:             ],
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:             "lv_name": "ceph_lv0",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:             "lv_size": "7511998464",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:             "name": "ceph_lv0",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:             "tags": {
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:                 "ceph.cluster_name": "ceph",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:                 "ceph.crush_device_class": "",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:                 "ceph.encrypted": "0",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:                 "ceph.osd_id": "1",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:                 "ceph.type": "block",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:                 "ceph.vdo": "0"
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:             },
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:             "type": "block",
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:             "vg_name": "ceph_vg0"
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:         }
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]:     ]
Jan 21 23:32:24 compute-0 sad_ramanujan[123245]: }
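
The JSON block above is the payload cephadm collects from `ceph-volume lvm list --format json` (run in the short-lived sad_ramanujan container started at 23:32:23): keys are OSD ids, each mapping to a list of logical volumes whose LVM tags carry the cluster fsid, OSD fsid, and encryption state. A minimal parsing sketch, assuming the captured stdout has been saved to a file named lvm_list.json (that filename is an assumption for illustration):

    import json
    import pathlib

    # lvm_list.json is assumed to hold the `ceph-volume lvm list --format json`
    # payload shown in the log above.
    lvs_by_osd = json.loads(pathlib.Path("lvm_list.json").read_text())
    for osd_id, lvs in lvs_by_osd.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"backing={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"encrypted={tags['ceph.encrypted']}")

For the data above this prints a single line for osd.1 on /dev/ceph_vg0/ceph_lv0, backed by /dev/loop3.
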
Jan 21 23:32:24 compute-0 systemd[1]: libpod-5ce664b7dbe3a1acea18d9cf495d610d4270baac3363e6b763f4bfd5be77c464.scope: Deactivated successfully.
Jan 21 23:32:24 compute-0 podman[123223]: 2026-01-21 23:32:24.804089992 +0000 UTC m=+0.907832682 container died 5ce664b7dbe3a1acea18d9cf495d610d4270baac3363e6b763f4bfd5be77c464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:32:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-85266637e4c496ef8dc15b1e553980b0eb01af8e352ca915554d17eec65a2b13-merged.mount: Deactivated successfully.
Jan 21 23:32:24 compute-0 podman[123223]: 2026-01-21 23:32:24.862985067 +0000 UTC m=+0.966727757 container remove 5ce664b7dbe3a1acea18d9cf495d610d4270baac3363e6b763f4bfd5be77c464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:32:24 compute-0 systemd[1]: libpod-conmon-5ce664b7dbe3a1acea18d9cf495d610d4270baac3363e6b763f4bfd5be77c464.scope: Deactivated successfully.
Jan 21 23:32:24 compute-0 sudo[123019]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:25 compute-0 sudo[123446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:32:25 compute-0 sudo[123446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:25 compute-0 sudo[123446]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:25 compute-0 sudo[123494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:32:25 compute-0 sudo[123494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:25 compute-0 sudo[123494]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:25 compute-0 ceph-mon[74318]: pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:25 compute-0 sudo[123543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:32:25 compute-0 sudo[123543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:25 compute-0 sudo[123543]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:32:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:25.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:32:25 compute-0 sudo[123595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncsyjldhmdtemsikavutwrctnqlxwbyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038344.1155155-899-56913930246122/AnsiballZ_copy.py'
Jan 21 23:32:25 compute-0 sudo[123595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:25 compute-0 sudo[123596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:32:25 compute-0 sudo[123596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:25 compute-0 python3.9[123615]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038344.1155155-899-56913930246122/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:25 compute-0 sudo[123595]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:25 compute-0 podman[123689]: 2026-01-21 23:32:25.590973043 +0000 UTC m=+0.048991117 container create bcc35e0c9a37879a2dcb9781914ceca2bf38cacf547f1274e74d5692efc987c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 21 23:32:25 compute-0 systemd[1]: Started libpod-conmon-bcc35e0c9a37879a2dcb9781914ceca2bf38cacf547f1274e74d5692efc987c6.scope.
Jan 21 23:32:25 compute-0 podman[123689]: 2026-01-21 23:32:25.566906398 +0000 UTC m=+0.024924452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:32:25 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:32:25 compute-0 podman[123689]: 2026-01-21 23:32:25.678381725 +0000 UTC m=+0.136399769 container init bcc35e0c9a37879a2dcb9781914ceca2bf38cacf547f1274e74d5692efc987c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:32:25 compute-0 podman[123689]: 2026-01-21 23:32:25.685195134 +0000 UTC m=+0.143213178 container start bcc35e0c9a37879a2dcb9781914ceca2bf38cacf547f1274e74d5692efc987c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:32:25 compute-0 podman[123689]: 2026-01-21 23:32:25.689321187 +0000 UTC m=+0.147339221 container attach bcc35e0c9a37879a2dcb9781914ceca2bf38cacf547f1274e74d5692efc987c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:32:25 compute-0 sleepy_cartwright[123748]: 167 167
Jan 21 23:32:25 compute-0 systemd[1]: libpod-bcc35e0c9a37879a2dcb9781914ceca2bf38cacf547f1274e74d5692efc987c6.scope: Deactivated successfully.
Jan 21 23:32:25 compute-0 podman[123689]: 2026-01-21 23:32:25.691272799 +0000 UTC m=+0.149290863 container died bcc35e0c9a37879a2dcb9781914ceca2bf38cacf547f1274e74d5692efc987c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:32:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6511b3e5aa1a289901e9ac8fa2c84c1939331ab0e4e04d4ff431eb0300ffe9e-merged.mount: Deactivated successfully.
Jan 21 23:32:25 compute-0 podman[123689]: 2026-01-21 23:32:25.730910914 +0000 UTC m=+0.188928948 container remove bcc35e0c9a37879a2dcb9781914ceca2bf38cacf547f1274e74d5692efc987c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 21 23:32:25 compute-0 systemd[1]: libpod-conmon-bcc35e0c9a37879a2dcb9781914ceca2bf38cacf547f1274e74d5692efc987c6.scope: Deactivated successfully.
Jan 21 23:32:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:25 compute-0 podman[123825]: 2026-01-21 23:32:25.905129849 +0000 UTC m=+0.048013226 container create 54099ac905166d1ba855e22b34fd789c835c953ef53fdbdeecc11d54db7253d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:32:25 compute-0 sudo[123863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmzibilliswrvzwledznfowwdilngagb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038345.5765371-944-259992188714025/AnsiballZ_stat.py'
Jan 21 23:32:25 compute-0 sudo[123863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:25 compute-0 systemd[1]: Started libpod-conmon-54099ac905166d1ba855e22b34fd789c835c953ef53fdbdeecc11d54db7253d4.scope.
Jan 21 23:32:25 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7ebd4d5181495a25916f3640fef1f98fddcbeb960609091062caf851aa3851/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7ebd4d5181495a25916f3640fef1f98fddcbeb960609091062caf851aa3851/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7ebd4d5181495a25916f3640fef1f98fddcbeb960609091062caf851aa3851/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7ebd4d5181495a25916f3640fef1f98fddcbeb960609091062caf851aa3851/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:32:25 compute-0 podman[123825]: 2026-01-21 23:32:25.880905229 +0000 UTC m=+0.023788626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:32:25 compute-0 podman[123825]: 2026-01-21 23:32:25.989913385 +0000 UTC m=+0.132796822 container init 54099ac905166d1ba855e22b34fd789c835c953ef53fdbdeecc11d54db7253d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:32:25 compute-0 podman[123825]: 2026-01-21 23:32:25.999246175 +0000 UTC m=+0.142129552 container start 54099ac905166d1ba855e22b34fd789c835c953ef53fdbdeecc11d54db7253d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Jan 21 23:32:26 compute-0 podman[123825]: 2026-01-21 23:32:26.003441621 +0000 UTC m=+0.146324988 container attach 54099ac905166d1ba855e22b34fd789c835c953ef53fdbdeecc11d54db7253d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 23:32:26 compute-0 python3.9[123865]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:32:26 compute-0 sudo[123863]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:26.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:26 compute-0 sudo[123949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skgwbahravvblbadmjprrhyqeqkpkbfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038345.5765371-944-259992188714025/AnsiballZ_file.py'
Jan 21 23:32:26 compute-0 sudo[123949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:26 compute-0 python3.9[123951]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:26 compute-0 sudo[123949]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:26 compute-0 stupefied_jackson[123869]: {
Jan 21 23:32:26 compute-0 stupefied_jackson[123869]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:32:26 compute-0 stupefied_jackson[123869]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:32:26 compute-0 stupefied_jackson[123869]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:32:26 compute-0 stupefied_jackson[123869]:         "osd_id": 1,
Jan 21 23:32:26 compute-0 stupefied_jackson[123869]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:32:26 compute-0 stupefied_jackson[123869]:         "type": "bluestore"
Jan 21 23:32:26 compute-0 stupefied_jackson[123869]:     }
Jan 21 23:32:26 compute-0 stupefied_jackson[123869]: }
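
This second JSON block is the companion `ceph-volume raw list --format json` view from the stupefied_jackson container, keyed by OSD uuid rather than OSD id. The two listings describe the same bluestore OSD and can be cross-checked; a sketch assuming both payloads were saved as lvm_list.json and raw_list.json (filenames again assumptions):

    import json
    import pathlib

    # Every bluestore OSD reported by `raw list` should be backed by an LV
    # that `lvm list` tagged with the same osd_fsid.
    lvm = json.loads(pathlib.Path("lvm_list.json").read_text())
    raw = json.loads(pathlib.Path("raw_list.json").read_text())
    lvm_fsids = {lv["tags"]["ceph.osd_fsid"]
                 for lvs in lvm.values() for lv in lvs}
    for osd_uuid, info in raw.items():
        status = "ok" if osd_uuid in lvm_fsids else "missing LVM tag"
        print(f"osd.{info['osd_id']} ({info['type']}) on {info['device']}: {status}")
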
Jan 21 23:32:26 compute-0 systemd[1]: libpod-54099ac905166d1ba855e22b34fd789c835c953ef53fdbdeecc11d54db7253d4.scope: Deactivated successfully.
Jan 21 23:32:26 compute-0 podman[123825]: 2026-01-21 23:32:26.870275283 +0000 UTC m=+1.013158620 container died 54099ac905166d1ba855e22b34fd789c835c953ef53fdbdeecc11d54db7253d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jackson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:32:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c7ebd4d5181495a25916f3640fef1f98fddcbeb960609091062caf851aa3851-merged.mount: Deactivated successfully.
Jan 21 23:32:26 compute-0 podman[123825]: 2026-01-21 23:32:26.935954735 +0000 UTC m=+1.078838082 container remove 54099ac905166d1ba855e22b34fd789c835c953ef53fdbdeecc11d54db7253d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:32:26 compute-0 systemd[1]: libpod-conmon-54099ac905166d1ba855e22b34fd789c835c953ef53fdbdeecc11d54db7253d4.scope: Deactivated successfully.
Jan 21 23:32:26 compute-0 sudo[123596]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:26 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:32:26 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:32:26 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:32:26 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:32:26 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev ff6b13f3-0ee7-4deb-8ee2-1fe5ee99eca5 does not exist
Jan 21 23:32:26 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 81dfe342-8f95-49b5-95aa-080c0d15b09c does not exist
Jan 21 23:32:26 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 8c869cad-a4a1-4587-89a8-1a2db62c4b42 does not exist
Jan 21 23:32:27 compute-0 sudo[124080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:32:27 compute-0 sudo[124080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:27 compute-0 sudo[124080]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:27 compute-0 sudo[124131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:32:27 compute-0 sudo[124131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:27 compute-0 sudo[124131]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:27 compute-0 sudo[124178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obaxzwkcvusjlwuznyxqricdqjsdhirk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038346.8325577-980-52569268868769/AnsiballZ_stat.py'
Jan 21 23:32:27 compute-0 sudo[124178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:27 compute-0 ceph-mon[74318]: pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:27 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:32:27 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:32:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:27.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:27 compute-0 python3.9[124182]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:32:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:32:27 compute-0 sudo[124178]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:27 compute-0 sudo[124259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlypvpcczuxgjktreqnaoupsnrgbjsap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038346.8325577-980-52569268868769/AnsiballZ_file.py'
Jan 21 23:32:27 compute-0 sudo[124259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:27 compute-0 python3.9[124261]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:27 compute-0 sudo[124259]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:28.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:28 compute-0 ceph-mon[74318]: pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:28 compute-0 sudo[124411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxlryyewrbqvdvxxweiohmdegozuiije ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038348.2421556-1016-279962362435156/AnsiballZ_stat.py'
Jan 21 23:32:28 compute-0 sudo[124411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:28 compute-0 python3.9[124413]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:32:28 compute-0 sudo[124411]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:29.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:29 compute-0 sudo[124490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryuvdfalkvgotaqcnouiytqwuposcznn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038348.2421556-1016-279962362435156/AnsiballZ_file.py'
Jan 21 23:32:29 compute-0 sudo[124490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:29 compute-0 python3.9[124492]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:29 compute-0 sudo[124490]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:30 compute-0 sudo[124642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euwkozpxlrphhwirqtbryypcczefrdev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038349.7864816-1055-184464240019371/AnsiballZ_command.py'
Jan 21 23:32:30 compute-0 sudo[124642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:30 compute-0 python3.9[124644]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:32:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:30.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:30 compute-0 sudo[124642]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:30 compute-0 ceph-mon[74318]: pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:31.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:31 compute-0 sshd-session[71293]: Received disconnect from 38.102.83.184 port 54696:11: disconnected by user
Jan 21 23:32:31 compute-0 sshd-session[71293]: Disconnected from user zuul 38.102.83.184 port 54696
Jan 21 23:32:31 compute-0 sshd-session[71290]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:32:31 compute-0 systemd-logind[786]: Session 18 logged out. Waiting for processes to exit.
Jan 21 23:32:31 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 21 23:32:31 compute-0 systemd[1]: session-18.scope: Consumed 1min 23.768s CPU time.
Jan 21 23:32:31 compute-0 systemd-logind[786]: Removed session 18.
Jan 21 23:32:31 compute-0 sudo[124798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpnvagdnxzjhgqojpmgxczolvowulszz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038350.9300988-1079-139417856343907/AnsiballZ_blockinfile.py'
Jan 21 23:32:31 compute-0 sudo[124798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:31 compute-0 python3.9[124800]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:31 compute-0 sudo[124798]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:32.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:32:32 compute-0 sudo[124950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivwfiadeefyrimperrkxcmsftehlvyme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038352.058344-1106-61353605258310/AnsiballZ_file.py'
Jan 21 23:32:32 compute-0 sudo[124950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:32 compute-0 python3.9[124952]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:32 compute-0 sudo[124950]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:32 compute-0 ceph-mon[74318]: pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:33 compute-0 sudo[125102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goetkxwvbxctwbytwlcochpgfphhoupr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038352.7665246-1106-185123426812085/AnsiballZ_file.py'
Jan 21 23:32:33 compute-0 sudo[125102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:33.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:33 compute-0 python3.9[125104]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:33 compute-0 sudo[125102]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:33 compute-0 sudo[125255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rviphwqgirkrgmavkmvjjqhrxqtmsexa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038353.4893005-1151-143563729750120/AnsiballZ_mount.py'
Jan 21 23:32:34 compute-0 sudo[125255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:34 compute-0 python3.9[125257]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 21 23:32:34 compute-0 sudo[125255]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:32:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:34.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:32:34 compute-0 sudo[125407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gywecjetuieprudgprngaaykjcpiypbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038354.409219-1151-187313421899131/AnsiballZ_mount.py'
Jan 21 23:32:34 compute-0 sudo[125407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:34 compute-0 ceph-mon[74318]: pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:34 compute-0 python3.9[125409]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 21 23:32:35 compute-0 sudo[125407]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:35.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:35 compute-0 sshd-session[117265]: Connection closed by 192.168.122.30 port 59334
Jan 21 23:32:35 compute-0 sshd-session[117262]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:32:35 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Jan 21 23:32:35 compute-0 systemd[1]: session-40.scope: Consumed 33.241s CPU time.
Jan 21 23:32:35 compute-0 systemd-logind[786]: Session 40 logged out. Waiting for processes to exit.
Jan 21 23:32:35 compute-0 systemd-logind[786]: Removed session 40.
Jan 21 23:32:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:36 compute-0 sudo[125435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:32:36 compute-0 sudo[125435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:36 compute-0 sudo[125435]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:36 compute-0 sudo[125460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:32:36 compute-0 sudo[125460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:36 compute-0 sudo[125460]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:32:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:36.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:32:36 compute-0 ceph-mon[74318]: pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:37.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:37 compute-0 irqbalance[782]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 21 23:32:37 compute-0 irqbalance[782]: IRQ 26 affinity is now unmanaged
Jan 21 23:32:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:32:37.318218) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038357318315, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 785, "num_deletes": 250, "total_data_size": 1175232, "memory_usage": 1197696, "flush_reason": "Manual Compaction"}
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038357326880, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 750375, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9997, "largest_seqno": 10781, "table_properties": {"data_size": 746989, "index_size": 1230, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8497, "raw_average_key_size": 19, "raw_value_size": 739898, "raw_average_value_size": 1728, "num_data_blocks": 54, "num_entries": 428, "num_filter_entries": 428, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769038291, "oldest_key_time": 1769038291, "file_creation_time": 1769038357, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 8734 microseconds, and 2838 cpu microseconds.
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:32:37.326963) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 750375 bytes OK
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:32:37.326980) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:32:37.330831) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:32:37.330846) EVENT_LOG_v1 {"time_micros": 1769038357330841, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:32:37.330866) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1171407, prev total WAL file size 1171407, number of live WAL files 2.
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:32:37.331472) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(732KB)], [23(9343KB)]
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038357331629, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 10318229, "oldest_snapshot_seqno": -1}
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3805 keys, 7768041 bytes, temperature: kUnknown
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038357394965, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 7768041, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7739581, "index_size": 17854, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9541, "raw_key_size": 92467, "raw_average_key_size": 24, "raw_value_size": 7667860, "raw_average_value_size": 2015, "num_data_blocks": 778, "num_entries": 3805, "num_filter_entries": 3805, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769038357, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:32:37.395640) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7768041 bytes
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:32:37.396950) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.9 rd, 121.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(24.1) write-amplify(10.4) OK, records in: 4293, records dropped: 488 output_compression: NoCompression
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:32:37.396973) EVENT_LOG_v1 {"time_micros": 1769038357396960, "job": 8, "event": "compaction_finished", "compaction_time_micros": 63719, "compaction_time_cpu_micros": 23869, "output_level": 6, "num_output_files": 1, "total_output_size": 7768041, "num_input_records": 4293, "num_output_records": 3805, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038357397281, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038357399210, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:32:37.331361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:32:37.399265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:32:37.399272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:32:37.399273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:32:37.399276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:32:37 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:32:37.399277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:32:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:38.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:38 compute-0 ceph-mon[74318]: pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:39.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:32:39
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'volumes', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'vms']
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:32:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:40 compute-0 sshd-session[125487]: Accepted publickey for zuul from 192.168.122.30 port 58332 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:32:40 compute-0 systemd-logind[786]: New session 41 of user zuul.
Jan 21 23:32:40 compute-0 systemd[1]: Started Session 41 of User zuul.
Jan 21 23:32:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:32:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:40.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:32:40 compute-0 sshd-session[125487]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:32:40 compute-0 ceph-mon[74318]: pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:40 compute-0 sudo[125640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsjluoarksgttdprxqqexokbthwlzrlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038360.4209352-23-56575549974901/AnsiballZ_tempfile.py'
Jan 21 23:32:40 compute-0 sudo[125640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:41 compute-0 python3.9[125642]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 21 23:32:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:41.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:41 compute-0 sudo[125640]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:41 compute-0 sudo[125793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqojtnhvxlwkkavdbjmpbxqondzdcafe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038361.3734918-59-61358143303339/AnsiballZ_stat.py'
Jan 21 23:32:41 compute-0 sudo[125793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:41 compute-0 python3.9[125795]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:32:42 compute-0 sudo[125793]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:32:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:42.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:32:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:32:42 compute-0 sudo[125947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvzivsakndtuzttxeqwajzxnacjrfjjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038362.277321-83-2464442415397/AnsiballZ_slurp.py'
Jan 21 23:32:42 compute-0 sudo[125947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:42 compute-0 ceph-mon[74318]: pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:42 compute-0 python3.9[125949]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 21 23:32:42 compute-0 sudo[125947]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:43.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:43 compute-0 sudo[126100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfghdeupgnivguizokjmvlwfazxdlwfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038363.2671618-107-61583142896144/AnsiballZ_stat.py'
Jan 21 23:32:43 compute-0 sudo[126100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:43 compute-0 python3.9[126102]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.lkhjomch follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:32:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:43 compute-0 sudo[126100]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:32:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:44.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:32:44 compute-0 sudo[126225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvdjpctfofcahqrjzshxzqcyamumowyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038363.2671618-107-61583142896144/AnsiballZ_copy.py'
Jan 21 23:32:44 compute-0 sudo[126225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:44 compute-0 python3.9[126227]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.lkhjomch mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769038363.2671618-107-61583142896144/.source.lkhjomch _original_basename=.alv_3x6d follow=False checksum=3bf5427aee2228d59619670803ca122ed8d124f3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:44 compute-0 sudo[126225]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:44 compute-0 ceph-mon[74318]: pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:45.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:45 compute-0 sudo[126378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxxzhairynquhchcvlipqeptxmuvydbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038364.8711014-152-46698301478050/AnsiballZ_setup.py'
Jan 21 23:32:45 compute-0 sudo[126378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:45 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 21 23:32:45 compute-0 python3.9[126380]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:32:45 compute-0 sudo[126378]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:46.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:46 compute-0 sudo[126533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxigythbvfxdpiqgpzueeakpnpifhlub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038366.113028-177-96306695582877/AnsiballZ_blockinfile.py'
Jan 21 23:32:46 compute-0 sudo[126533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:46 compute-0 python3.9[126535]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD7OVoFJg81ARdVJA6FyjUI977hlEvtniq2MXKgT3+nUajJ/3zk0XrH9mqvnc7jGz1Fq9+A4wkfRZtVnIrSpwkWbHn3JVL+1mcHJJ6dVIN4pspwgMzeYWm8GG4IYxREKgFCO78ae7vy8DLO9Yi4L+xt6d8Uni8chzNjGMRPdF4FSt+CXwzwGzOQJML3t+bTWLuZRnYroDhrVD0w4AlD+nalMPzvjpAzMn5ZQVTYkQ8sZR7AHw27yAtolX0jzmhql0UCKLUOmWMZxFbGWBTcLCT4COxHXJN+STZ0AbVq1vYG6dQJybeUzXYasq5HK7jx4CFgZTCROxv0lWjOXbN6QbVPVUhxl7tourrbcBhURHA2b9PYkDUIWGqbvaZRWT2PFnTFUx8TCdZZhJRdB+UuryMzpiQ/SHsWtLHR8EVChV7JhPjRfsGibqpF/aqGE9vdiOdM3Ropqlgqn8bSVdD2DPsuKl/UBu0CnLmqPBtozX7rBGvtP/vXyrstFdMWspO/tQs=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBA9Ham04cvw39gDVvgsX1L6qw86QKeK+eylBdUgm9ej
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKIaRu8jlZgKyVs6rhHSbKal+29RD+wf0CzqvOjZMOqqZElzcAyYT09MEy7bg54xF2mQd4qnfLLyE+7XxpD7dZY=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6zIwRdzuVPSMYHryNuK9eshVk/94AdKSgczxgPpAUAgv1pRdk1RrZxNhBhnpleF1/WCOT3PGscfxf/xua2WZKIZe0Qb1MOHOok2+eI5T7qv3bh7JsxcGnnpHvypsZIC6uaEmQu8mt+yBg9IJcFDJNwOkM+LyWbF4jRxU32MW//D7snXiyYKce7U5n921ZpxWpX0wQpiGSvvhVaSKgjJ12Qm30AfCwc9Gl/dwJ+8SB/VKfcPK5dGnaKteOlDj32FuT5VwsZRTuwmLsXZEjTwzbJbx32BZD+MOVGVlsT2BzorpcSbGf3yJh/qNmuRQLEBR8QcgTOQ8nh/e1hHXpfpg2liVLFbQtnRLaT+Ag65R8Tau6cHlQisMu5YBmFvY9q7EACrxe7Uavv7N19DoAG1AJejelEEReYaGNzIddWd8jLxM/c5UWsFVHZYuOqlvA9pQLCgFIeOQZkLRQnB32SoCptagf95NlDEARDFeCjQQjTCIRd22xbCDCDk47B6blY0k=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINHR/2fehxatNJgT9VzNjvKNTWkTHFG641eICQ8hedGu
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBp3CfTIdOEoh91MPd1RH3hVEuEee5LbmruYGsmGAX+dvECmqm9iE9VXKTlo8wPu5sj6SzmxIcTnNG3XoPMq2SE=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCcTCSmBDjNHSZfaHG8fKTwJ4GF2jXfxozwo4mW5kcz/MAe+h5bas2d3r9FavORp/q67J4ZkPP2YsZprpzH/cCMpCJy4msytgeGplSBQmMw4Mybm9FemjlDMz+p8hES75I/8Lsrn0hI2jnW06F3l2pmJ3lg6xHUBqBTbLCh9S5FEHDnzzBfekLREeN4Vo8hRbDxXVEf1J/9OrEtSgNBBGVlAX8166VfPo2u+DIPXKcYFO80JpSHMFkcAGwQKkiBzVg18RmbA0LZVc2J659He3C8sLe01q8pTBbmtS9OaAWL27r9vC5f+yYkt/b+aHborYPFYHzyXpO7qNx28Aq5S4eFs7susZNV1FTL2beXRfOlYBLrFwy95VtxeFQi/OwO3YX8jhPpv08c9BY2+U7t3+kXcRQpbYcNnryIdCrUQ22eqogka311YLSnGbaPXjBMygOMU3wsKYpFSVMEXeT0Bg/ZNhaAkD9NNVE8+EE5ycnJTe1l4czVuAEmGwypQ1HgGok=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOfVjUuuW00Wki8wzseTLka/NNgXZv01yFssrjqPd+vx
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFxAU6jnvKfeGnQCjanLn0gpiYTpeExRBIXO5JrMYzMY98jAeCG9Lktt11h9g/CH/mue3MKLaP3lm3xf41m6zbk=
                                              create=True mode=0644 path=/tmp/ansible.lkhjomch state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:46 compute-0 sudo[126533]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:46 compute-0 ceph-mon[74318]: pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:47.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:32:47 compute-0 sudo[126686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iowfkslwknmjuccczecvflhypylvogma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038367.1476874-201-214222175942446/AnsiballZ_command.py'
Jan 21 23:32:47 compute-0 sudo[126686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:47 compute-0 python3.9[126688]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.lkhjomch' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:32:47 compute-0 sudo[126686]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:32:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:48.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:32:48 compute-0 sudo[126840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeeiphiiioqetclpdtzfeuetnfcxabcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038368.101251-225-226220276623692/AnsiballZ_file.py'
Jan 21 23:32:48 compute-0 sudo[126840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:48 compute-0 python3.9[126842]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.lkhjomch state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:32:48 compute-0 sudo[126840]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:49 compute-0 sshd-session[125490]: Connection closed by 192.168.122.30 port 58332
Jan 21 23:32:49 compute-0 sshd-session[125487]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:32:49 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Jan 21 23:32:49 compute-0 systemd[1]: session-41.scope: Consumed 5.787s CPU time.
Jan 21 23:32:49 compute-0 systemd-logind[786]: Session 41 logged out. Waiting for processes to exit.
Jan 21 23:32:49 compute-0 systemd-logind[786]: Removed session 41.
Jan 21 23:32:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:49.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:49 compute-0 ceph-mon[74318]: pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:50.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:51.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:51 compute-0 ceph-mon[74318]: pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 23:32:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2404 writes, 10K keys, 2403 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2404 writes, 2403 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2404 writes, 10K keys, 2403 commit groups, 1.0 writes per commit group, ingest: 13.69 MB, 0.02 MB/s
                                           Interval WAL: 2404 writes, 2403 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     82.3      0.14              0.03         4    0.035       0      0       0.0       0.0
                                             L6      1/0    7.41 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1    100.2     85.7      0.28              0.08         3    0.093     12K   1302       0.0       0.0
                                            Sum      1/0    7.41 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     66.9     84.6      0.42              0.11         7    0.060     12K   1302       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     67.6     85.4      0.41              0.11         6    0.069     12K   1302       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    100.2     85.7      0.28              0.08         3    0.093     12K   1302       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     84.8      0.13              0.03         3    0.045       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.011, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.4 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f1db2f1f0#2 capacity: 304.00 MB usage: 1.14 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 8.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(53,1.00 MB,0.330368%) FilterBlock(8,41.98 KB,0.013487%) IndexBlock(8,92.06 KB,0.0295739%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 21 23:32:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:32:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:32:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:52.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:32:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:32:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:53.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:32:53 compute-0 ceph-mon[74318]: pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:32:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:32:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:54.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:54 compute-0 sshd-session[126870]: Accepted publickey for zuul from 192.168.122.30 port 34244 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:32:54 compute-0 systemd-logind[786]: New session 42 of user zuul.
Jan 21 23:32:54 compute-0 systemd[1]: Started Session 42 of User zuul.
Jan 21 23:32:54 compute-0 sshd-session[126870]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:32:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:55.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:55 compute-0 ceph-mon[74318]: pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:55 compute-0 python3.9[127024]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:32:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:56 compute-0 sudo[127105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:32:56 compute-0 sudo[127105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:56 compute-0 sudo[127105]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:32:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:56.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:32:56 compute-0 sudo[127130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:32:56 compute-0 sudo[127130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:32:56 compute-0 sudo[127130]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:56 compute-0 sudo[127228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgtgfjkisqjxtojtfoncoxenpjkbwwtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038376.0843265-56-241965668073251/AnsiballZ_systemd.py'
Jan 21 23:32:56 compute-0 sudo[127228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:57 compute-0 python3.9[127230]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 21 23:32:57 compute-0 sudo[127228]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:32:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:57.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:32:57 compute-0 ceph-mon[74318]: pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:32:57 compute-0 sudo[127383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljcucqqjrahijemlkouykkhafffaktrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038377.4003363-80-208004654685811/AnsiballZ_systemd.py'
Jan 21 23:32:57 compute-0 sudo[127383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:58 compute-0 python3.9[127385]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 23:32:58 compute-0 sudo[127383]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:32:58.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:58 compute-0 ceph-mon[74318]: pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:58 compute-0 sudo[127536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znrysfnwghwbujhkdhjcyagyyzswdhfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038378.4576156-107-249610551297494/AnsiballZ_command.py'
Jan 21 23:32:58 compute-0 sudo[127536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:32:59 compute-0 python3.9[127538]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:32:59 compute-0 sudo[127536]: pam_unix(sudo:session): session closed for user root
Jan 21 23:32:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:32:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:32:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:32:59.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:32:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:32:59 compute-0 sudo[127690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xncqvgesvbijaivoiyyszlrsmqopovcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038379.3845344-131-172388818826130/AnsiballZ_stat.py'
Jan 21 23:32:59 compute-0 sudo[127690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:00 compute-0 python3.9[127692]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:33:00 compute-0 sudo[127690]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:33:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:00.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:33:00 compute-0 sudo[127842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxwpjsnahffhaiyhtfrumlcxgotpfqct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038380.3512385-158-79754268411268/AnsiballZ_file.py'
Jan 21 23:33:00 compute-0 sudo[127842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:00 compute-0 ceph-mon[74318]: pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:01 compute-0 python3.9[127844]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:33:01 compute-0 sudo[127842]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:33:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:01.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:33:01 compute-0 sshd-session[126873]: Connection closed by 192.168.122.30 port 34244
Jan 21 23:33:01 compute-0 sshd-session[126870]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:33:01 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Jan 21 23:33:01 compute-0 systemd[1]: session-42.scope: Consumed 4.361s CPU time.
Jan 21 23:33:01 compute-0 systemd-logind[786]: Session 42 logged out. Waiting for processes to exit.
Jan 21 23:33:01 compute-0 systemd-logind[786]: Removed session 42.
Jan 21 23:33:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:33:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:02.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:02 compute-0 ceph-mon[74318]: pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:03.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:04.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:04 compute-0 ceph-mon[74318]: pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:05.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:33:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:06.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:33:06 compute-0 sshd-session[127872]: Accepted publickey for zuul from 192.168.122.30 port 34280 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:33:06 compute-0 systemd-logind[786]: New session 43 of user zuul.
Jan 21 23:33:06 compute-0 systemd[1]: Started Session 43 of User zuul.
Jan 21 23:33:06 compute-0 sshd-session[127872]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:33:06 compute-0 ceph-mon[74318]: pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:07.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:33:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:07 compute-0 python3.9[128026]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:33:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:33:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:08.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:33:08 compute-0 sudo[128180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eicczyorzvqwnmudazubmxtycgjgmize ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038388.543183-62-108046361543643/AnsiballZ_setup.py'
Jan 21 23:33:08 compute-0 sudo[128180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:09 compute-0 ceph-mon[74318]: pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:09.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:33:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:33:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:33:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:33:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:33:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:33:09 compute-0 python3.9[128182]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:33:09 compute-0 sudo[128180]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:09 compute-0 sudo[128265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxixkgshkigcwefrkjyqlxrnaircdlaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038388.543183-62-108046361543643/AnsiballZ_dnf.py'
Jan 21 23:33:09 compute-0 sudo[128265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:10 compute-0 python3.9[128267]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 21 23:33:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:10.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:11 compute-0 ceph-mon[74318]: pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:33:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:11.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:33:11 compute-0 sudo[128265]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:33:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:12.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:12 compute-0 python3.9[128419]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:33:13 compute-0 ceph-mon[74318]: pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:33:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:13.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:33:13 compute-0 python3.9[128571]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 23:33:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:14.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:14 compute-0 python3.9[128721]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:33:15 compute-0 ceph-mon[74318]: pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:33:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:15.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:33:15 compute-0 python3.9[128872]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:33:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:16 compute-0 sshd-session[127875]: Connection closed by 192.168.122.30 port 34280
Jan 21 23:33:16 compute-0 sshd-session[127872]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:33:16 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Jan 21 23:33:16 compute-0 systemd[1]: session-43.scope: Consumed 6.628s CPU time.
Jan 21 23:33:16 compute-0 systemd-logind[786]: Session 43 logged out. Waiting for processes to exit.
Jan 21 23:33:16 compute-0 systemd-logind[786]: Removed session 43.
Jan 21 23:33:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:33:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:16.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:33:16 compute-0 sudo[128897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:33:16 compute-0 sudo[128897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:16 compute-0 sudo[128897]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:16 compute-0 sudo[128922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:33:16 compute-0 sudo[128922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:16 compute-0 sudo[128922]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:17 compute-0 ceph-mon[74318]: pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:17.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:33:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:18.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:19 compute-0 ceph-mon[74318]: pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:19.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:20.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:21.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:21 compute-0 ceph-mon[74318]: pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:21 compute-0 sshd-session[128950]: Accepted publickey for zuul from 192.168.122.30 port 41366 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:33:21 compute-0 systemd-logind[786]: New session 44 of user zuul.
Jan 21 23:33:21 compute-0 systemd[1]: Started Session 44 of User zuul.
Jan 21 23:33:21 compute-0 sshd-session[128950]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:33:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:33:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:33:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:22.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:33:23 compute-0 python3.9[129103]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:33:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:23.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:23 compute-0 ceph-mon[74318]: pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:24.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:24 compute-0 sudo[129258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-panepwaxeudoxvtrfjakxtjbkzbjskfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038404.1411822-110-143478808562221/AnsiballZ_file.py'
Jan 21 23:33:24 compute-0 sudo[129258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:24 compute-0 python3.9[129260]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:33:24 compute-0 sudo[129258]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:25.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:25 compute-0 ceph-mon[74318]: pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:25 compute-0 sudo[129411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ottgukbcduiojlievprlewlrmemwrnzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038405.1610227-110-18186599005849/AnsiballZ_file.py'
Jan 21 23:33:25 compute-0 sudo[129411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:25 compute-0 python3.9[129413]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:33:25 compute-0 sudo[129411]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:26 compute-0 sudo[129563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkypnuqxnrugbtpafvbmsewdckyoqbyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038405.9852617-160-269707247484133/AnsiballZ_stat.py'
Jan 21 23:33:26 compute-0 sudo[129563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:33:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:26.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:33:26 compute-0 python3.9[129565]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:33:26 compute-0 sudo[129563]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:27 compute-0 sudo[129686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwfysoeveeyjswsgepnlwsvbqhhamedj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038405.9852617-160-269707247484133/AnsiballZ_copy.py'
Jan 21 23:33:27 compute-0 sudo[129686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:27.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:27 compute-0 python3.9[129688]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038405.9852617-160-269707247484133/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=64844ede6f67516ce5317cda201485f39f8dd2c0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:33:27 compute-0 ceph-mon[74318]: pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:27 compute-0 sudo[129686]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:33:27 compute-0 sudo[129737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:33:27 compute-0 sudo[129737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:27 compute-0 sudo[129737]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:27 compute-0 sudo[129791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:33:27 compute-0 sudo[129791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:27 compute-0 sudo[129791]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:27 compute-0 sudo[129839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:33:27 compute-0 sudo[129839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:27 compute-0 sudo[129839]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:27 compute-0 sudo[129864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:33:27 compute-0 sudo[129864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:27 compute-0 sudo[129939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxbhhmrryayrzjgrxxlroersobuxsbvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038407.4322534-160-59017131605792/AnsiballZ_stat.py'
Jan 21 23:33:27 compute-0 sudo[129939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:27 compute-0 python3.9[129941]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:33:27 compute-0 sudo[129939]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:28 compute-0 sudo[129864]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:28 compute-0 ceph-mon[74318]: pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:28 compute-0 sudo[130093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkxgogcqrachzobdzaxhlrklkxgsnfco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038407.4322534-160-59017131605792/AnsiballZ_copy.py'
Jan 21 23:33:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:28 compute-0 sudo[130093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:28.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:28 compute-0 python3.9[130095]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038407.4322534-160-59017131605792/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=f94adaf3fbedc07d454e384403a116f5c456c0ae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:33:28 compute-0 sudo[130093]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:29 compute-0 sudo[130245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvvahjucnrsyfljdnvleemazvujkxdqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038408.7408195-160-266147357431603/AnsiballZ_stat.py'
Jan 21 23:33:29 compute-0 sudo[130245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:33:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:29.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:33:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:33:29 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:33:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:33:29 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:33:29 compute-0 python3.9[130247]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:33:29 compute-0 sudo[130245]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:29 compute-0 sudo[130369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmubxzsfigdqbiuhwccphdzjmhgyusnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038408.7408195-160-266147357431603/AnsiballZ_copy.py'
Jan 21 23:33:29 compute-0 sudo[130369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:30 compute-0 python3.9[130371]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038408.7408195-160-266147357431603/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f08f9ffabd19ac0daf459e8683d3358804506e42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:33:30 compute-0 sudo[130369]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:33:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:30.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:33:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:33:30 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:33:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:33:30 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:33:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:33:30 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:33:30 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 7adca60c-212d-4904-8bdf-6ed7cfc19564 does not exist
Jan 21 23:33:30 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev a03e7ac3-a220-4ce1-9002-82fcaaa9546d does not exist
Jan 21 23:33:30 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 072feef3-54b1-4f71-95e6-cdb30419b193 does not exist
Jan 21 23:33:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:33:30 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:33:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:33:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:33:30 compute-0 ceph-mon[74318]: pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:33:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:33:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:33:30 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:33:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:33:30 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:33:30 compute-0 sudo[130448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:33:30 compute-0 sudo[130448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:30 compute-0 sudo[130448]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:30 compute-0 sudo[130473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:33:30 compute-0 sudo[130473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:30 compute-0 sudo[130473]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:30 compute-0 sudo[130498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:33:30 compute-0 sudo[130498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:30 compute-0 sudo[130498]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:30 compute-0 sudo[130523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:33:30 compute-0 sudo[130523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:31 compute-0 sudo[130636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqcbfjxowhstpxagketlxjpdkqgsbkuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038410.392986-293-213510905842318/AnsiballZ_file.py'
Jan 21 23:33:31 compute-0 sudo[130636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:31.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:31 compute-0 podman[130663]: 2026-01-21 23:33:31.350456264 +0000 UTC m=+0.055798039 container create 50eeda0408bd44f2c902fd7a8c4e73e8501cdc4488339e2c4fd1200f22b1429d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 23:33:31 compute-0 systemd[1]: Started libpod-conmon-50eeda0408bd44f2c902fd7a8c4e73e8501cdc4488339e2c4fd1200f22b1429d.scope.
Jan 21 23:33:31 compute-0 python3.9[130648]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:33:31 compute-0 podman[130663]: 2026-01-21 23:33:31.320424455 +0000 UTC m=+0.025766300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:33:31 compute-0 sudo[130636]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:31 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:33:31 compute-0 podman[130663]: 2026-01-21 23:33:31.455136119 +0000 UTC m=+0.160477964 container init 50eeda0408bd44f2c902fd7a8c4e73e8501cdc4488339e2c4fd1200f22b1429d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jang, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:33:31 compute-0 podman[130663]: 2026-01-21 23:33:31.470118692 +0000 UTC m=+0.175460487 container start 50eeda0408bd44f2c902fd7a8c4e73e8501cdc4488339e2c4fd1200f22b1429d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jang, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 21 23:33:31 compute-0 podman[130663]: 2026-01-21 23:33:31.474446633 +0000 UTC m=+0.179788398 container attach 50eeda0408bd44f2c902fd7a8c4e73e8501cdc4488339e2c4fd1200f22b1429d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jang, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 21 23:33:31 compute-0 stoic_jang[130679]: 167 167
Jan 21 23:33:31 compute-0 systemd[1]: libpod-50eeda0408bd44f2c902fd7a8c4e73e8501cdc4488339e2c4fd1200f22b1429d.scope: Deactivated successfully.
Jan 21 23:33:31 compute-0 podman[130663]: 2026-01-21 23:33:31.480144175 +0000 UTC m=+0.185485930 container died 50eeda0408bd44f2c902fd7a8c4e73e8501cdc4488339e2c4fd1200f22b1429d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:33:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4d0cd77342927e4323737ab9e5c175a6697abfa5321fe3d7cda15151167c13d-merged.mount: Deactivated successfully.
Jan 21 23:33:31 compute-0 podman[130663]: 2026-01-21 23:33:31.529376335 +0000 UTC m=+0.234718090 container remove 50eeda0408bd44f2c902fd7a8c4e73e8501cdc4488339e2c4fd1200f22b1429d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jang, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:33:31 compute-0 systemd[1]: libpod-conmon-50eeda0408bd44f2c902fd7a8c4e73e8501cdc4488339e2c4fd1200f22b1429d.scope: Deactivated successfully.
Jan 21 23:33:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:33:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:33:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:33:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:33:31 compute-0 podman[130780]: 2026-01-21 23:33:31.729310621 +0000 UTC m=+0.058354966 container create e0fefe47d4a2ca5b87a1019df4d724b604acb3d2f6bcc22beaf2c4bbe3e2d4eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_joliot, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 23:33:31 compute-0 systemd[1]: Started libpod-conmon-e0fefe47d4a2ca5b87a1019df4d724b604acb3d2f6bcc22beaf2c4bbe3e2d4eb.scope.
Jan 21 23:33:31 compute-0 podman[130780]: 2026-01-21 23:33:31.699783799 +0000 UTC m=+0.028828244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:33:31 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/241f18be48c6964933e6ddc726d76c1548d9014482fb26fc59762104c562c4bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/241f18be48c6964933e6ddc726d76c1548d9014482fb26fc59762104c562c4bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/241f18be48c6964933e6ddc726d76c1548d9014482fb26fc59762104c562c4bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/241f18be48c6964933e6ddc726d76c1548d9014482fb26fc59762104c562c4bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/241f18be48c6964933e6ddc726d76c1548d9014482fb26fc59762104c562c4bf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:33:31 compute-0 podman[130780]: 2026-01-21 23:33:31.841645049 +0000 UTC m=+0.170689414 container init e0fefe47d4a2ca5b87a1019df4d724b604acb3d2f6bcc22beaf2c4bbe3e2d4eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 23:33:31 compute-0 podman[130780]: 2026-01-21 23:33:31.85489384 +0000 UTC m=+0.183938235 container start e0fefe47d4a2ca5b87a1019df4d724b604acb3d2f6bcc22beaf2c4bbe3e2d4eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 23:33:31 compute-0 podman[130780]: 2026-01-21 23:33:31.859233711 +0000 UTC m=+0.188278076 container attach e0fefe47d4a2ca5b87a1019df4d724b604acb3d2f6bcc22beaf2c4bbe3e2d4eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 21 23:33:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:31 compute-0 sudo[130875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahzbwbnoutlzoysogxxmjvpazwgaruvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038411.562679-293-268786892582117/AnsiballZ_file.py'
Jan 21 23:33:31 compute-0 sudo[130875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:32 compute-0 python3.9[130877]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:33:32 compute-0 sudo[130875]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:33:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:33:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:32.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:33:32 compute-0 ceph-mon[74318]: pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:32 compute-0 sudo[131031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdecytcxqgnnjovpcfugqzsuohgypihv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038412.365639-348-92103641866656/AnsiballZ_stat.py'
Jan 21 23:33:32 compute-0 sudo[131031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:32 compute-0 great_joliot[130832]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:33:32 compute-0 great_joliot[130832]: --> relative data size: 1.0
Jan 21 23:33:32 compute-0 great_joliot[130832]: --> All data devices are unavailable
Jan 21 23:33:32 compute-0 systemd[1]: libpod-e0fefe47d4a2ca5b87a1019df4d724b604acb3d2f6bcc22beaf2c4bbe3e2d4eb.scope: Deactivated successfully.
Jan 21 23:33:32 compute-0 conmon[130832]: conmon e0fefe47d4a2ca5b87a1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0fefe47d4a2ca5b87a1019df4d724b604acb3d2f6bcc22beaf2c4bbe3e2d4eb.scope/container/memory.events
Jan 21 23:33:32 compute-0 podman[130780]: 2026-01-21 23:33:32.829435684 +0000 UTC m=+1.158480039 container died e0fefe47d4a2ca5b87a1019df4d724b604acb3d2f6bcc22beaf2c4bbe3e2d4eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 23:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-241f18be48c6964933e6ddc726d76c1548d9014482fb26fc59762104c562c4bf-merged.mount: Deactivated successfully.
Jan 21 23:33:32 compute-0 python3.9[131035]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:33:32 compute-0 podman[130780]: 2026-01-21 23:33:32.908189836 +0000 UTC m=+1.237234191 container remove e0fefe47d4a2ca5b87a1019df4d724b604acb3d2f6bcc22beaf2c4bbe3e2d4eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_joliot, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:33:32 compute-0 systemd[1]: libpod-conmon-e0fefe47d4a2ca5b87a1019df4d724b604acb3d2f6bcc22beaf2c4bbe3e2d4eb.scope: Deactivated successfully.
Jan 21 23:33:32 compute-0 sudo[131031]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:32 compute-0 sudo[130523]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:33 compute-0 sudo[131054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:33:33 compute-0 sudo[131054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:33 compute-0 sudo[131054]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:33 compute-0 sudo[131102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:33:33 compute-0 sudo[131102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:33 compute-0 sudo[131102]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:33 compute-0 sudo[131151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:33:33 compute-0 sudo[131151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:33 compute-0 sudo[131151]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:33 compute-0 sudo[131199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:33:33 compute-0 sudo[131199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:33.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:33 compute-0 sudo[131274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfgjhtmojlhksszjxjhypynpejlsznxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038412.365639-348-92103641866656/AnsiballZ_copy.py'
Jan 21 23:33:33 compute-0 sudo[131274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:33 compute-0 python3.9[131277]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038412.365639-348-92103641866656/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=31de5db8cf07155457b0b8d47cc532434c195c89 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:33:33 compute-0 sudo[131274]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:33 compute-0 podman[131343]: 2026-01-21 23:33:33.660378186 +0000 UTC m=+0.046845639 container create f0108b5cb6e6ea68fc345d87105e72d4846159834ed74f585131528c83813484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shannon, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:33:33 compute-0 systemd[1]: Started libpod-conmon-f0108b5cb6e6ea68fc345d87105e72d4846159834ed74f585131528c83813484.scope.
Jan 21 23:33:33 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:33:33 compute-0 podman[131343]: 2026-01-21 23:33:33.640550206 +0000 UTC m=+0.027017659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:33:33 compute-0 podman[131343]: 2026-01-21 23:33:33.753297136 +0000 UTC m=+0.139764589 container init f0108b5cb6e6ea68fc345d87105e72d4846159834ed74f585131528c83813484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shannon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:33:33 compute-0 podman[131343]: 2026-01-21 23:33:33.761675619 +0000 UTC m=+0.148143062 container start f0108b5cb6e6ea68fc345d87105e72d4846159834ed74f585131528c83813484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shannon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 21 23:33:33 compute-0 podman[131343]: 2026-01-21 23:33:33.76500492 +0000 UTC m=+0.151472383 container attach f0108b5cb6e6ea68fc345d87105e72d4846159834ed74f585131528c83813484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shannon, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:33:33 compute-0 great_shannon[131384]: 167 167
Jan 21 23:33:33 compute-0 systemd[1]: libpod-f0108b5cb6e6ea68fc345d87105e72d4846159834ed74f585131528c83813484.scope: Deactivated successfully.
Jan 21 23:33:33 compute-0 podman[131343]: 2026-01-21 23:33:33.769147636 +0000 UTC m=+0.155615119 container died f0108b5cb6e6ea68fc345d87105e72d4846159834ed74f585131528c83813484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:33:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c33e10c8691565c0c1bfc08044dc69fec9976d540358d78bf900b3b1ed621ec2-merged.mount: Deactivated successfully.
Jan 21 23:33:33 compute-0 podman[131343]: 2026-01-21 23:33:33.811402624 +0000 UTC m=+0.197870077 container remove f0108b5cb6e6ea68fc345d87105e72d4846159834ed74f585131528c83813484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 23:33:33 compute-0 systemd[1]: libpod-conmon-f0108b5cb6e6ea68fc345d87105e72d4846159834ed74f585131528c83813484.scope: Deactivated successfully.
Jan 21 23:33:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:33 compute-0 podman[131482]: 2026-01-21 23:33:33.989953784 +0000 UTC m=+0.047326523 container create f7f2efee6f1734d2a755bbc155cedebcf08946c7bfb834537d8998fff3dc41cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_faraday, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 21 23:33:33 compute-0 sudo[131521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbmeeoxqivmwdigzafatqtmtnmiifdvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038413.6952283-348-213882715953334/AnsiballZ_stat.py'
Jan 21 23:33:34 compute-0 sudo[131521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:34 compute-0 systemd[1]: Started libpod-conmon-f7f2efee6f1734d2a755bbc155cedebcf08946c7bfb834537d8998fff3dc41cc.scope.
Jan 21 23:33:34 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:33:34 compute-0 podman[131482]: 2026-01-21 23:33:33.969938538 +0000 UTC m=+0.027311287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c120fb1cb1d643d7c42b22d71073a80a1a99fe13da968885adc8f12e64c8531/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c120fb1cb1d643d7c42b22d71073a80a1a99fe13da968885adc8f12e64c8531/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c120fb1cb1d643d7c42b22d71073a80a1a99fe13da968885adc8f12e64c8531/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c120fb1cb1d643d7c42b22d71073a80a1a99fe13da968885adc8f12e64c8531/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:33:34 compute-0 podman[131482]: 2026-01-21 23:33:34.08375908 +0000 UTC m=+0.141131829 container init f7f2efee6f1734d2a755bbc155cedebcf08946c7bfb834537d8998fff3dc41cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_faraday, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:33:34 compute-0 podman[131482]: 2026-01-21 23:33:34.092820554 +0000 UTC m=+0.150193283 container start f7f2efee6f1734d2a755bbc155cedebcf08946c7bfb834537d8998fff3dc41cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_faraday, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:33:34 compute-0 podman[131482]: 2026-01-21 23:33:34.096574918 +0000 UTC m=+0.153947667 container attach f7f2efee6f1734d2a755bbc155cedebcf08946c7bfb834537d8998fff3dc41cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 23:33:34 compute-0 python3.9[131523]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:33:34 compute-0 sudo[131521]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:34.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:34 compute-0 sudo[131652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqxcezqapzompoovtwvrcnthhovhnraj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038413.6952283-348-213882715953334/AnsiballZ_copy.py'
Jan 21 23:33:34 compute-0 sudo[131652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:34 compute-0 python3.9[131654]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038413.6952283-348-213882715953334/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=bd8671e34ffbddf64a3ff30c0d7a4c74c6757136 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:33:34 compute-0 fervent_faraday[131527]: {
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:     "1": [
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:         {
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:             "devices": [
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:                 "/dev/loop3"
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:             ],
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:             "lv_name": "ceph_lv0",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:             "lv_size": "7511998464",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:             "name": "ceph_lv0",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:             "tags": {
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:                 "ceph.cluster_name": "ceph",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:                 "ceph.crush_device_class": "",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:                 "ceph.encrypted": "0",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:                 "ceph.osd_id": "1",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:                 "ceph.type": "block",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:                 "ceph.vdo": "0"
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:             },
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:             "type": "block",
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:             "vg_name": "ceph_vg0"
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:         }
Jan 21 23:33:34 compute-0 fervent_faraday[131527]:     ]
Jan 21 23:33:34 compute-0 fervent_faraday[131527]: }
Jan 21 23:33:34 compute-0 sudo[131652]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:34 compute-0 podman[131482]: 2026-01-21 23:33:34.83527553 +0000 UTC m=+0.892648259 container died f7f2efee6f1734d2a755bbc155cedebcf08946c7bfb834537d8998fff3dc41cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 21 23:33:34 compute-0 systemd[1]: libpod-f7f2efee6f1734d2a755bbc155cedebcf08946c7bfb834537d8998fff3dc41cc.scope: Deactivated successfully.
Jan 21 23:33:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c120fb1cb1d643d7c42b22d71073a80a1a99fe13da968885adc8f12e64c8531-merged.mount: Deactivated successfully.
Jan 21 23:33:34 compute-0 podman[131482]: 2026-01-21 23:33:34.89845425 +0000 UTC m=+0.955826979 container remove f7f2efee6f1734d2a755bbc155cedebcf08946c7bfb834537d8998fff3dc41cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_faraday, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 21 23:33:34 compute-0 systemd[1]: libpod-conmon-f7f2efee6f1734d2a755bbc155cedebcf08946c7bfb834537d8998fff3dc41cc.scope: Deactivated successfully.
Jan 21 23:33:34 compute-0 sudo[131199]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:34 compute-0 ceph-mon[74318]: pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:35 compute-0 sudo[131708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:33:35 compute-0 sudo[131708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:35 compute-0 sudo[131708]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:35 compute-0 sudo[131766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:33:35 compute-0 sudo[131766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:35 compute-0 sudo[131766]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:35 compute-0 sudo[131818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:33:35 compute-0 sudo[131818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:35 compute-0 sudo[131818]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:35 compute-0 sudo[131865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:33:35 compute-0 sudo[131865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:35.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:35 compute-0 sudo[131920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmazoduzvlmenmxxqbfhgakwtrjzfbxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038414.9662957-348-55900638228719/AnsiballZ_stat.py'
Jan 21 23:33:35 compute-0 sudo[131920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:35 compute-0 python3.9[131922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:33:35 compute-0 sudo[131920]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:35 compute-0 podman[131981]: 2026-01-21 23:33:35.586578973 +0000 UTC m=+0.058934934 container create cdc5bfcd68d780f5281d8aa89c72ab8044ee7d7ff8ec25a8f4aa1881705b0513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_keller, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:33:35 compute-0 systemd[1]: Started libpod-conmon-cdc5bfcd68d780f5281d8aa89c72ab8044ee7d7ff8ec25a8f4aa1881705b0513.scope.
Jan 21 23:33:35 compute-0 podman[131981]: 2026-01-21 23:33:35.558136632 +0000 UTC m=+0.030492673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:33:35 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:33:35 compute-0 podman[131981]: 2026-01-21 23:33:35.694382593 +0000 UTC m=+0.166738564 container init cdc5bfcd68d780f5281d8aa89c72ab8044ee7d7ff8ec25a8f4aa1881705b0513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 21 23:33:35 compute-0 podman[131981]: 2026-01-21 23:33:35.701634383 +0000 UTC m=+0.173990334 container start cdc5bfcd68d780f5281d8aa89c72ab8044ee7d7ff8ec25a8f4aa1881705b0513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 23:33:35 compute-0 romantic_keller[132027]: 167 167
Jan 21 23:33:35 compute-0 podman[131981]: 2026-01-21 23:33:35.70486325 +0000 UTC m=+0.177219201 container attach cdc5bfcd68d780f5281d8aa89c72ab8044ee7d7ff8ec25a8f4aa1881705b0513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:33:35 compute-0 systemd[1]: libpod-cdc5bfcd68d780f5281d8aa89c72ab8044ee7d7ff8ec25a8f4aa1881705b0513.scope: Deactivated successfully.
Jan 21 23:33:35 compute-0 podman[131981]: 2026-01-21 23:33:35.705880451 +0000 UTC m=+0.178236402 container died cdc5bfcd68d780f5281d8aa89c72ab8044ee7d7ff8ec25a8f4aa1881705b0513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:33:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d5cd5d82c98abcb446e8f6eb7608b88a2bd337d079c8e14fc328dc996e94043-merged.mount: Deactivated successfully.
Jan 21 23:33:35 compute-0 podman[131981]: 2026-01-21 23:33:35.739158477 +0000 UTC m=+0.211514428 container remove cdc5bfcd68d780f5281d8aa89c72ab8044ee7d7ff8ec25a8f4aa1881705b0513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 21 23:33:35 compute-0 systemd[1]: libpod-conmon-cdc5bfcd68d780f5281d8aa89c72ab8044ee7d7ff8ec25a8f4aa1881705b0513.scope: Deactivated successfully.
Jan 21 23:33:35 compute-0 sudo[132120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwcomgtbpnetrimhvuetmrlnzxmttkig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038414.9662957-348-55900638228719/AnsiballZ_copy.py'
Jan 21 23:33:35 compute-0 sudo[132120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:35 compute-0 podman[132126]: 2026-01-21 23:33:35.91185082 +0000 UTC m=+0.053662653 container create 27033752c8ec5a10894a88c2e7ff84eeeef6a5969f33697a7e7d35f1c3586a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gould, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:33:35 compute-0 systemd[1]: Started libpod-conmon-27033752c8ec5a10894a88c2e7ff84eeeef6a5969f33697a7e7d35f1c3586a56.scope.
Jan 21 23:33:35 compute-0 podman[132126]: 2026-01-21 23:33:35.880369338 +0000 UTC m=+0.022181211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:33:35 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:33:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4fe88f416ba8f32f4e5c1f9de0e2f18323b134c639ce583cb36f269403a915/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:33:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4fe88f416ba8f32f4e5c1f9de0e2f18323b134c639ce583cb36f269403a915/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:33:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4fe88f416ba8f32f4e5c1f9de0e2f18323b134c639ce583cb36f269403a915/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:33:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4fe88f416ba8f32f4e5c1f9de0e2f18323b134c639ce583cb36f269403a915/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:33:36 compute-0 podman[132126]: 2026-01-21 23:33:36.007328978 +0000 UTC m=+0.149140781 container init 27033752c8ec5a10894a88c2e7ff84eeeef6a5969f33697a7e7d35f1c3586a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gould, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 23:33:36 compute-0 podman[132126]: 2026-01-21 23:33:36.021043622 +0000 UTC m=+0.162855425 container start 27033752c8ec5a10894a88c2e7ff84eeeef6a5969f33697a7e7d35f1c3586a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gould, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:33:36 compute-0 podman[132126]: 2026-01-21 23:33:36.025328832 +0000 UTC m=+0.167140665 container attach 27033752c8ec5a10894a88c2e7ff84eeeef6a5969f33697a7e7d35f1c3586a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gould, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:33:36 compute-0 python3.9[132128]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038414.9662957-348-55900638228719/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=46cd41c5c0a0d8d4885e06ac2ab44e986dca2c03 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:33:36 compute-0 sudo[132120]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:33:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:36.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:33:36 compute-0 sudo[132255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:33:36 compute-0 sudo[132255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:36 compute-0 sudo[132255]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:36 compute-0 sudo[132336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwqftyilrrcygdtgyshfknhvfgpngmgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038416.331076-476-121680983986768/AnsiballZ_file.py'
Jan 21 23:33:36 compute-0 sudo[132336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:36 compute-0 sudo[132313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:33:36 compute-0 sudo[132313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:36 compute-0 sudo[132313]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:36 compute-0 keen_gould[132143]: {
Jan 21 23:33:36 compute-0 keen_gould[132143]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:33:36 compute-0 keen_gould[132143]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:33:36 compute-0 keen_gould[132143]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:33:36 compute-0 keen_gould[132143]:         "osd_id": 1,
Jan 21 23:33:36 compute-0 keen_gould[132143]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:33:36 compute-0 keen_gould[132143]:         "type": "bluestore"
Jan 21 23:33:36 compute-0 keen_gould[132143]:     }
Jan 21 23:33:36 compute-0 keen_gould[132143]: }
Jan 21 23:33:36 compute-0 python3.9[132351]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:33:36 compute-0 sudo[132336]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:36 compute-0 systemd[1]: libpod-27033752c8ec5a10894a88c2e7ff84eeeef6a5969f33697a7e7d35f1c3586a56.scope: Deactivated successfully.
Jan 21 23:33:36 compute-0 podman[132126]: 2026-01-21 23:33:36.936756618 +0000 UTC m=+1.078568421 container died 27033752c8ec5a10894a88c2e7ff84eeeef6a5969f33697a7e7d35f1c3586a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gould, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:33:36 compute-0 ceph-mon[74318]: pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e4fe88f416ba8f32f4e5c1f9de0e2f18323b134c639ce583cb36f269403a915-merged.mount: Deactivated successfully.
Jan 21 23:33:36 compute-0 podman[132126]: 2026-01-21 23:33:36.988625887 +0000 UTC m=+1.130437690 container remove 27033752c8ec5a10894a88c2e7ff84eeeef6a5969f33697a7e7d35f1c3586a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gould, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:33:36 compute-0 systemd[1]: libpod-conmon-27033752c8ec5a10894a88c2e7ff84eeeef6a5969f33697a7e7d35f1c3586a56.scope: Deactivated successfully.
Jan 21 23:33:37 compute-0 sudo[131865]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:33:37 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:33:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:33:37 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:33:37 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 708e3d56-6378-41ef-90ae-c76fe46d11a2 does not exist
Jan 21 23:33:37 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev cf8fefea-cda2-4d65-b00c-8bc2f1472cc5 does not exist
Jan 21 23:33:37 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 85339162-8d79-4306-9d07-b6e4cf111ea4 does not exist
Jan 21 23:33:37 compute-0 sudo[132412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:33:37 compute-0 sudo[132412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:37 compute-0 sudo[132412]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:37 compute-0 sudo[132472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:33:37 compute-0 sudo[132472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:37 compute-0 sudo[132472]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:37.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:33:37 compute-0 sudo[132580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyszgmrueyobqhiqggkvavxthujoitxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038417.1002467-476-46316299807357/AnsiballZ_file.py'
Jan 21 23:33:37 compute-0 sudo[132580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:37 compute-0 python3.9[132582]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:33:37 compute-0 sudo[132580]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:38 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:33:38 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:33:38 compute-0 sudo[132732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azcarrqqbwxbmuovkaapnvfplpepxtlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038417.848185-524-162375292121459/AnsiballZ_stat.py'
Jan 21 23:33:38 compute-0 sudo[132732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:38 compute-0 python3.9[132734]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:33:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:33:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:38.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:33:38 compute-0 sudo[132732]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:38 compute-0 sudo[132855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snvgnpnhdytujskyqrydkhqyagdfyctl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038417.848185-524-162375292121459/AnsiballZ_copy.py'
Jan 21 23:33:38 compute-0 sudo[132855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:38 compute-0 python3.9[132857]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038417.848185-524-162375292121459/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=1ccccfd523079d74f3821f8028331d60d450014e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:33:39 compute-0 sudo[132855]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:39 compute-0 ceph-mon[74318]: pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:33:39
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'volumes', 'backups', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'images']
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:33:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:39.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:33:39 compute-0 sudo[133008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkubxsjfdnewteipbhtlnotvajnhznca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038419.134767-524-217261299645938/AnsiballZ_stat.py'
Jan 21 23:33:39 compute-0 sudo[133008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:39 compute-0 python3.9[133010]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:33:39 compute-0 sudo[133008]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:40 compute-0 sudo[133131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdjxzlhyknytkpqkxpumwietijtprctq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038419.134767-524-217261299645938/AnsiballZ_copy.py'
Jan 21 23:33:40 compute-0 sudo[133131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:40 compute-0 python3.9[133133]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038419.134767-524-217261299645938/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=bd8671e34ffbddf64a3ff30c0d7a4c74c6757136 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:33:40 compute-0 sudo[133131]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:33:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:40.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:33:40 compute-0 sudo[133283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxmqsfwbgqbddfeybkvmcnqffaivuuff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038420.4564173-524-177618869121847/AnsiballZ_stat.py'
Jan 21 23:33:40 compute-0 sudo[133283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:41 compute-0 python3.9[133285]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:33:41 compute-0 sudo[133283]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:41 compute-0 ceph-mon[74318]: pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:33:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:41.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:33:41 compute-0 sudo[133407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmhnphtspofwrjpyupojrnjtrwyhvjik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038420.4564173-524-177618869121847/AnsiballZ_copy.py'
Jan 21 23:33:41 compute-0 sudo[133407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:41 compute-0 python3.9[133409]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038420.4564173-524-177618869121847/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=29223d8ea754042af6ce9ba5d8ff692f8c9ca6cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:33:41 compute-0 sudo[133407]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:33:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:42.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:42 compute-0 sudo[133559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfgqycvrgyzdkxbudpmuzmbduafiptox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038422.394618-703-120871102250948/AnsiballZ_file.py'
Jan 21 23:33:42 compute-0 sudo[133559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:42 compute-0 python3.9[133561]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:33:43 compute-0 sudo[133559]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:43 compute-0 ceph-mon[74318]: pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:33:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:43.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:33:43 compute-0 sudo[133712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amzdzaejtjpqkuznxbcwdkodruueqqid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038423.1892838-728-181715895152665/AnsiballZ_stat.py'
Jan 21 23:33:43 compute-0 sudo[133712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:43 compute-0 python3.9[133714]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:33:43 compute-0 sudo[133712]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:44 compute-0 sudo[133835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghyktuqukjtmppanignsvnvvovggqwvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038423.1892838-728-181715895152665/AnsiballZ_copy.py'
Jan 21 23:33:44 compute-0 sudo[133835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:44 compute-0 python3.9[133837]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038423.1892838-728-181715895152665/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e9e57f31efd3627d7bd35fbbf35e3ce75fb9748b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:33:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:33:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:44.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:33:44 compute-0 sudo[133835]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:45 compute-0 sudo[133987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmgsfnnjmedipzpbmuiiaeoppyxybbrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038424.6734366-777-124395874911134/AnsiballZ_file.py'
Jan 21 23:33:45 compute-0 sudo[133987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:45 compute-0 ceph-mon[74318]: pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:45 compute-0 python3.9[133989]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:33:45 compute-0 sudo[133987]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:45.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:45 compute-0 sudo[134140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izirxbsafaaezhmgdpofuvfpwbpwyucw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038425.462468-801-79667670192719/AnsiballZ_stat.py'
Jan 21 23:33:45 compute-0 sudo[134140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:45 compute-0 python3.9[134142]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:33:45 compute-0 sudo[134140]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:46 compute-0 sudo[134263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xarivjaqzlyypcqukejovwrrixcrecnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038425.462468-801-79667670192719/AnsiballZ_copy.py'
Jan 21 23:33:46 compute-0 sudo[134263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:33:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:46.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:33:46 compute-0 python3.9[134265]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038425.462468-801-79667670192719/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e9e57f31efd3627d7bd35fbbf35e3ce75fb9748b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:33:46 compute-0 sudo[134263]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:47 compute-0 ceph-mon[74318]: pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:47 compute-0 sudo[134415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uomqyoomgqjhkzpyeecymynailpbkuvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038426.819311-846-48532468651907/AnsiballZ_file.py'
Jan 21 23:33:47 compute-0 sudo[134415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:33:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:47.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:33:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:33:47 compute-0 python3.9[134417]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:33:47 compute-0 sudo[134415]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:47 compute-0 sudo[134568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guvrlujtiooisbyowclhenvqdfzxbrih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038427.5637188-867-8161334211402/AnsiballZ_stat.py'
Jan 21 23:33:47 compute-0 sudo[134568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:48 compute-0 python3.9[134570]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:33:48 compute-0 sudo[134568]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:48.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:48 compute-0 sudo[134691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znyjdzoatszbdlzemkxtzlmvjorhcayd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038427.5637188-867-8161334211402/AnsiballZ_copy.py'
Jan 21 23:33:48 compute-0 sudo[134691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:48 compute-0 python3.9[134693]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038427.5637188-867-8161334211402/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e9e57f31efd3627d7bd35fbbf35e3ce75fb9748b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:33:48 compute-0 sudo[134691]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:49 compute-0 ceph-mon[74318]: pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:49.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:49 compute-0 sudo[134843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdlmgbtospeayikrqdtivoktbazkwmoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038428.972676-915-110490387120375/AnsiballZ_file.py'
Jan 21 23:33:49 compute-0 sudo[134843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:49 compute-0 python3.9[134846]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:33:49 compute-0 sudo[134843]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:50 compute-0 sudo[134996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjxgqnecyulbwqgimzqxtecarsbcczzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038429.8633597-940-31272878888687/AnsiballZ_stat.py'
Jan 21 23:33:50 compute-0 sudo[134996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:33:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:50.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:33:50 compute-0 python3.9[134998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:33:50 compute-0 sudo[134996]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:51 compute-0 sudo[135119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztxvfgoywtkniywgoymehvciykjzhhip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038429.8633597-940-31272878888687/AnsiballZ_copy.py'
Jan 21 23:33:51 compute-0 sudo[135119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:51 compute-0 ceph-mon[74318]: pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:51 compute-0 python3.9[135121]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038429.8633597-940-31272878888687/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e9e57f31efd3627d7bd35fbbf35e3ce75fb9748b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:33:51 compute-0 sudo[135119]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:51.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:51 compute-0 sudo[135272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgyiejpqamybnvnqsyxikpvgvykrugwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038431.4865017-990-166333732457902/AnsiballZ_file.py'
Jan 21 23:33:51 compute-0 sudo[135272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:52 compute-0 python3.9[135274]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:33:52 compute-0 sudo[135272]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:33:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:33:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:52.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:33:52 compute-0 sudo[135424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stjgnitrleysomlltvvuihirpuazothx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038432.2392697-1016-152470134473880/AnsiballZ_stat.py'
Jan 21 23:33:52 compute-0 sudo[135424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:52 compute-0 python3.9[135426]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:33:52 compute-0 sudo[135424]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:53 compute-0 sudo[135547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izwceyhofhbxokbmdvpusqcolkapyozp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038432.2392697-1016-152470134473880/AnsiballZ_copy.py'
Jan 21 23:33:53 compute-0 sudo[135547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:53 compute-0 ceph-mon[74318]: pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:33:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:53.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:33:53 compute-0 python3.9[135549]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038432.2392697-1016-152470134473880/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e9e57f31efd3627d7bd35fbbf35e3ce75fb9748b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:33:53 compute-0 sudo[135547]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:33:53 compute-0 sudo[135700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orusulzzjrlgzkzxqeljfvvmkrsnovgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038433.6807253-1062-40379787533565/AnsiballZ_file.py'
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:33:53 compute-0 sudo[135700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:33:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:33:54 compute-0 python3.9[135702]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:33:54 compute-0 sudo[135700]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:33:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:54.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:33:54 compute-0 sudo[135852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laaztcvelzmiaizmwgnmizntuoliqrsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038434.3842626-1085-218729285454445/AnsiballZ_stat.py'
Jan 21 23:33:54 compute-0 sudo[135852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:54 compute-0 python3.9[135854]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:33:54 compute-0 sudo[135852]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:55 compute-0 ceph-mon[74318]: pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:55.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:55 compute-0 sudo[135976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmbpenllztaihscyfzwubjgmtpxwpdqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038434.3842626-1085-218729285454445/AnsiballZ_copy.py'
Jan 21 23:33:55 compute-0 sudo[135976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:33:55 compute-0 python3.9[135978]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038434.3842626-1085-218729285454445/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e9e57f31efd3627d7bd35fbbf35e3ce75fb9748b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:33:55 compute-0 sudo[135976]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:55 compute-0 sshd-session[128953]: Connection closed by 192.168.122.30 port 41366
Jan 21 23:33:55 compute-0 sshd-session[128950]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:33:55 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Jan 21 23:33:55 compute-0 systemd[1]: session-44.scope: Consumed 26.121s CPU time.
Jan 21 23:33:55 compute-0 systemd-logind[786]: Session 44 logged out. Waiting for processes to exit.
Jan 21 23:33:55 compute-0 systemd-logind[786]: Removed session 44.
Jan 21 23:33:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:33:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:56.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:33:56 compute-0 sudo[136003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:33:56 compute-0 sudo[136003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:56 compute-0 sudo[136003]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:56 compute-0 sudo[136028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:33:56 compute-0 sudo[136028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:33:56 compute-0 sudo[136028]: pam_unix(sudo:session): session closed for user root
Jan 21 23:33:57 compute-0 ceph-mon[74318]: pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:57.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:33:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:33:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:33:58.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:33:59 compute-0 ceph-mon[74318]: pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:33:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:33:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:33:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:33:59.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:33:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:34:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:00.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:34:01 compute-0 ceph-mon[74318]: pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:01.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:01 compute-0 sshd-session[136055]: Accepted publickey for zuul from 192.168.122.30 port 52650 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:34:01 compute-0 systemd-logind[786]: New session 45 of user zuul.
Jan 21 23:34:01 compute-0 systemd[1]: Started Session 45 of User zuul.
Jan 21 23:34:01 compute-0 sshd-session[136055]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:34:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:02 compute-0 sudo[136209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnnhdpnhtcimlvsddedygknsnjoslasl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038441.5230303-26-265298494536071/AnsiballZ_file.py'
Jan 21 23:34:02 compute-0 sudo[136209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:02 compute-0 python3.9[136211]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:34:02 compute-0 sudo[136209]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:34:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:02.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:02 compute-0 sudo[136361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfouxnxxtvqourveorontndeboweobtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038442.4945664-62-116219358723724/AnsiballZ_stat.py'
Jan 21 23:34:02 compute-0 sudo[136361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:03 compute-0 python3.9[136363]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:34:03 compute-0 sudo[136361]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:03 compute-0 ceph-mon[74318]: pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:03.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:03 compute-0 sudo[136485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umhzjcgqjbtdrpbmwfnzsjjkulnioaap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038442.4945664-62-116219358723724/AnsiballZ_copy.py'
Jan 21 23:34:03 compute-0 sudo[136485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:04 compute-0 python3.9[136487]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769038442.4945664-62-116219358723724/.source.conf _original_basename=ceph.conf follow=False checksum=f7b57b8362e0dcb6b2b157816b0ee4adaf22f2c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:34:04 compute-0 sudo[136485]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:04 compute-0 ceph-mon[74318]: pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:34:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:04.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:34:04 compute-0 sudo[136637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puermuganttxqrgeofxhhtqvqrskbzym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038444.2237911-62-177622627262493/AnsiballZ_stat.py'
Jan 21 23:34:04 compute-0 sudo[136637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:04 compute-0 python3.9[136639]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:34:04 compute-0 sudo[136637]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:05 compute-0 sudo[136760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwhzhxbgttnhinireqehogpzhfgdjcex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038444.2237911-62-177622627262493/AnsiballZ_copy.py'
Jan 21 23:34:05 compute-0 sudo[136760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:05.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:05 compute-0 python3.9[136762]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769038444.2237911-62-177622627262493/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=f25b484d050c82fa53bbf5f0ee2ad75e8c75c1da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:34:05 compute-0 sudo[136760]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:05 compute-0 sshd-session[136059]: Connection closed by 192.168.122.30 port 52650
Jan 21 23:34:05 compute-0 sshd-session[136055]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:34:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:05 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Jan 21 23:34:05 compute-0 systemd[1]: session-45.scope: Consumed 3.125s CPU time.
Jan 21 23:34:05 compute-0 systemd-logind[786]: Session 45 logged out. Waiting for processes to exit.
Jan 21 23:34:05 compute-0 systemd-logind[786]: Removed session 45.
Jan 21 23:34:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:06.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:06 compute-0 ceph-mon[74318]: pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:07.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:34:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:34:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:08.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:34:08 compute-0 ceph-mon[74318]: pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:34:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:34:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:34:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:34:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:34:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:34:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:09.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:10.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:11 compute-0 ceph-mon[74318]: pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:34:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:11.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:34:11 compute-0 sshd-session[136791]: Accepted publickey for zuul from 192.168.122.30 port 33432 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:34:11 compute-0 systemd-logind[786]: New session 46 of user zuul.
Jan 21 23:34:11 compute-0 systemd[1]: Started Session 46 of User zuul.
Jan 21 23:34:11 compute-0 sshd-session[136791]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:34:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:34:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:12.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:12 compute-0 python3.9[136944]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:34:13 compute-0 ceph-mon[74318]: pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:13.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:13 compute-0 sudo[137099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldsofyoqspkuvhiyvuksegexewskjkce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038453.223642-62-159084651708563/AnsiballZ_file.py'
Jan 21 23:34:13 compute-0 sudo[137099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:13 compute-0 python3.9[137101]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:34:14 compute-0 sudo[137099]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:14.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:14 compute-0 sudo[137251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cijshhixqysaypwozvuwmzxqqopgwisd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038454.1498861-62-28688575799679/AnsiballZ_file.py'
Jan 21 23:34:14 compute-0 sudo[137251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:14 compute-0 python3.9[137253]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:34:14 compute-0 sudo[137251]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:15 compute-0 ceph-mon[74318]: pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:15.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:15 compute-0 python3.9[137403]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:34:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:16 compute-0 sudo[137554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgjbrouxqqquknlekaajhnudligwkmhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038455.798853-131-161999741172971/AnsiballZ_seboolean.py'
Jan 21 23:34:16 compute-0 sudo[137554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:16.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:16 compute-0 python3.9[137556]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
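
Note: the seboolean task above persistently enables virt_sandbox_use_netlink, which sandboxed (containerized) virt processes need in order to open netlink sockets. The Ansible module works through the SELinux Python bindings, but the end state matches the classic CLI; a sketch of the same operation, assuming setsebool is on PATH:

    import subprocess

    def set_selinux_bool(name, on, persistent=True):
        # setsebool -P writes the boolean into the policy as well as the
        # running state, matching persistent=True state=True above.
        cmd = ["setsebool"]
        if persistent:
            cmd.append("-P")
        cmd += [name, "on" if on else "off"]
        subprocess.run(cmd, check=True)

    set_selinux_bool("virt_sandbox_use_netlink", True)
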
Jan 21 23:34:16 compute-0 ceph-mon[74318]: pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:17 compute-0 sudo[137557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:34:17 compute-0 sudo[137557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:17 compute-0 sudo[137557]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:17 compute-0 sudo[137582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:34:17 compute-0 sudo[137582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:17 compute-0 sudo[137582]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:17.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:34:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:18 compute-0 sudo[137554]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:18.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:18 compute-0 sudo[137761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txawmicerrinhgkdcqmmwfoynckrnrdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038458.3814538-161-159241745279063/AnsiballZ_setup.py'
Jan 21 23:34:18 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 21 23:34:18 compute-0 sudo[137761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:19 compute-0 python3.9[137763]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:34:19 compute-0 ceph-mon[74318]: pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:34:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:19.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:34:19 compute-0 sudo[137761]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:19 compute-0 sudo[137846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhxqgvckamcvychwgmoehnapuapsycsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038458.3814538-161-159241745279063/AnsiballZ_dnf.py'
Jan 21 23:34:19 compute-0 sudo[137846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:20 compute-0 python3.9[137848]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:34:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:20.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:21 compute-0 ceph-mon[74318]: pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:34:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:21.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:34:21 compute-0 sudo[137846]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:22 compute-0 sudo[138000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arqkpmfdapadyzdhituwvolyguflkwne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038461.6758735-197-6938967647584/AnsiballZ_systemd.py'
Jan 21 23:34:22 compute-0 sudo[138000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:34:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:22.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:22 compute-0 python3.9[138002]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 23:34:22 compute-0 sudo[138000]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:23 compute-0 ceph-mon[74318]: pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:34:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:23.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:34:23 compute-0 sudo[138156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnmzjmrvarmqqpxgdvtvfhveciomeais ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769038462.9111733-221-61789574734588/AnsiballZ_edpm_nftables_snippet.py'
Jan 21 23:34:23 compute-0 sudo[138156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:23 compute-0 python3[138158]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 21 23:34:23 compute-0 sudo[138156]: pam_unix(sudo:session): session closed for user root
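
Note: the snippet content written to /var/lib/edpm-config/firewall/ovn.yaml opens UDP 4789 (VXLAN) and UDP 6081 (Geneve) for Neutron tenant networks, and rules 120/121 add NOTRACK entries in the raw table's OUTPUT and PREROUTING chains so Geneve traffic bypasses connection tracking. A purely illustrative sketch of how rule dicts like these could be rendered into nft commands; the default table and chain names below are assumptions, and the real layout comes from the edpm templates (chains.j2, jump-chain.j2, ruleset.j2) seen later in this log:

    # Hypothetical renderer: table/chain defaults are illustrative only.
    rules = [
        {"rule_name": "118 neutron vxlan networks",
         "rule": {"proto": "udp", "dport": 4789}},
        {"rule_name": "120 neutron geneve networks no conntrack",
         "rule": {"proto": "udp", "dport": 6081, "table": "raw",
                  "chain": "OUTPUT", "jump": "NOTRACK"}},
    ]

    def render(entry):
        r = entry["rule"]
        table = r.get("table", "filter")   # assumed default
        chain = r.get("chain", "INPUT")    # assumed default
        verdict = "notrack" if r.get("jump") == "NOTRACK" else "accept"
        return (f'nft add rule ip {table} {chain} {r["proto"]} '
                f'dport {r["dport"]} counter {verdict} '
                f'comment "{entry["rule_name"]}"')

    for e in rules:
        print(render(e))
    # nft add rule ip filter INPUT udp dport 4789 counter accept comment "118 ..."
    # nft add rule ip raw OUTPUT udp dport 6081 counter notrack comment "120 ..."
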
Jan 21 23:34:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:24 compute-0 sudo[138308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bacpqyrhrjjckiyuuqsnwzbqexumfdtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038463.956352-248-232645358828718/AnsiballZ_file.py'
Jan 21 23:34:24 compute-0 sudo[138308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:24.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:24 compute-0 python3.9[138310]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:34:24 compute-0 sudo[138308]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:25 compute-0 ceph-mon[74318]: pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:25 compute-0 sudo[138460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrliitjooxdqpvtdhvqritkqwyndklhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038464.740611-272-134060340680518/AnsiballZ_stat.py'
Jan 21 23:34:25 compute-0 sudo[138460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:25.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:25 compute-0 python3.9[138462]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:34:25 compute-0 sudo[138460]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:25 compute-0 sudo[138539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucsuwnorobaectuppljdtgqgihazqbkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038464.740611-272-134060340680518/AnsiballZ_file.py'
Jan 21 23:34:25 compute-0 sudo[138539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:25 compute-0 python3.9[138541]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:34:25 compute-0 sudo[138539]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:34:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:26.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:34:26 compute-0 sudo[138691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukmiiewoagezalexvfysdqjwxnwrnxca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038466.2887864-308-215187895776650/AnsiballZ_stat.py'
Jan 21 23:34:26 compute-0 sudo[138691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:26 compute-0 python3.9[138693]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:34:26 compute-0 sudo[138691]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:27 compute-0 sudo[138769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlhnndddgnyafbyhurqdvwzystoiqrqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038466.2887864-308-215187895776650/AnsiballZ_file.py'
Jan 21 23:34:27 compute-0 sudo[138769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:27 compute-0 ceph-mon[74318]: pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:27.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:34:27 compute-0 python3.9[138771]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.fk8zhkv0 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:34:27 compute-0 sudo[138769]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:28 compute-0 sudo[138922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvhuyacuytksfkmveroryjlsvqmrucxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038467.5520873-344-45128550926916/AnsiballZ_stat.py'
Jan 21 23:34:28 compute-0 sudo[138922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:34:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:28.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:34:28 compute-0 python3.9[138924]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:34:28 compute-0 sudo[138922]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:28 compute-0 sudo[139000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obouesdlireajxxdecdctwowadjibjnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038467.5520873-344-45128550926916/AnsiballZ_file.py'
Jan 21 23:34:28 compute-0 sudo[139000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:29 compute-0 python3.9[139002]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:34:29 compute-0 sudo[139000]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:29 compute-0 ceph-mon[74318]: pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:29.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:29 compute-0 sudo[139153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stzhkvvcgnoczeziuzxvhnqtshdlseht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038469.342649-383-223975857431341/AnsiballZ_command.py'
Jan 21 23:34:29 compute-0 sudo[139153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:30 compute-0 python3.9[139155]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:34:30 compute-0 sudo[139153]: pam_unix(sudo:session): session closed for user root
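
Note: before laying down the edpm files, the play snapshots the live ruleset with nft -j list ruleset. The -j output is a single JSON object whose "nftables" array mixes metainfo, table, chain, and rule elements, each keyed by its kind; a short sketch of walking it:

    import json
    import subprocess

    out = subprocess.run(["nft", "-j", "list", "ruleset"], check=True,
                         capture_output=True, text=True).stdout
    for obj in json.loads(out)["nftables"]:
        if "chain" in obj:
            c = obj["chain"]
            # e.g. ip/filter/INPUT, ip/raw/PREROUTING, ...
            print(f'{c["family"]}/{c["table"]}/{c["name"]}')
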
Jan 21 23:34:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:34:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:30.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:34:31 compute-0 sudo[139306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nagnvbidykmmlwjtrmzxjburmpdmizqt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769038470.6772835-407-171366631968606/AnsiballZ_edpm_nftables_from_files.py'
Jan 21 23:34:31 compute-0 sudo[139306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:31 compute-0 ceph-mon[74318]: pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:31.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:31 compute-0 python3[139308]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 21 23:34:31 compute-0 sudo[139306]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:31 compute-0 sudo[139459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvjvcspcitigsayghhdfvekktdttnvux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038471.6296632-431-174988460975125/AnsiballZ_stat.py'
Jan 21 23:34:31 compute-0 sudo[139459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:32 compute-0 python3.9[139461]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:34:32 compute-0 sudo[139459]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:34:32 compute-0 ceph-mon[74318]: pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:32.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:32 compute-0 sudo[139584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnsmzylvnbuonocwfjdunxhjyoqwhdoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038471.6296632-431-174988460975125/AnsiballZ_copy.py'
Jan 21 23:34:32 compute-0 sudo[139584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:32 compute-0 python3.9[139586]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038471.6296632-431-174988460975125/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:34:32 compute-0 sudo[139584]: pam_unix(sudo:session): session closed for user root
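
Note: each templated file lands via a stat-then-copy pair, and the copy task logs the SHA-1 content checksum of what it wrote (checksum=81c2fc96... for edpm-jumps.nft here, matching the checksum_algorithm=sha1 in the stat calls). Recomputing the hash is a quick way to confirm a node still has the file the play deployed:

    import hashlib

    def sha1_of(path):
        # The stat/copy tasks above use SHA-1 content checksums.
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "81c2fc96c23335ffe374f9b064e885d5d971ddf9"  # from the log above
    print(sha1_of("/etc/nftables/edpm-jumps.nft") == expected)
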
Jan 21 23:34:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:33.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:33 compute-0 sudo[139737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utrnautecidixoickxioygbgjmvvetwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038473.1704426-476-154335156588040/AnsiballZ_stat.py'
Jan 21 23:34:33 compute-0 sudo[139737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:33 compute-0 python3.9[139739]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:34:33 compute-0 sudo[139737]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:34 compute-0 sudo[139862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rblpdfrbozsoyyvucuzfplxasjmgzotv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038473.1704426-476-154335156588040/AnsiballZ_copy.py'
Jan 21 23:34:34 compute-0 sudo[139862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:34 compute-0 python3.9[139864]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038473.1704426-476-154335156588040/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:34:34 compute-0 sudo[139862]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:34.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:34 compute-0 sudo[140014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asujaalmphjrshpyausjojndcvvpewxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038474.6082795-521-197487390482916/AnsiballZ_stat.py'
Jan 21 23:34:34 compute-0 sudo[140014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:34 compute-0 ceph-mon[74318]: pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:35 compute-0 python3.9[140016]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:34:35 compute-0 sudo[140014]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:35.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:35 compute-0 sudo[140140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahheiccdrdwxkkjdkxgzaeipyfastzap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038474.6082795-521-197487390482916/AnsiballZ_copy.py'
Jan 21 23:34:35 compute-0 sudo[140140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:35 compute-0 python3.9[140142]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038474.6082795-521-197487390482916/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:34:35 compute-0 sudo[140140]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:36 compute-0 sudo[140292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njozlqernaqgeekuvotgtdzivshwevqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038475.9781675-566-133716964886050/AnsiballZ_stat.py'
Jan 21 23:34:36 compute-0 sudo[140292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:34:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:36.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:34:36 compute-0 python3.9[140294]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:34:36 compute-0 sudo[140292]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:36 compute-0 sudo[140417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmszsobbfdfhywocpepgutxmkedbnvbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038475.9781675-566-133716964886050/AnsiballZ_copy.py'
Jan 21 23:34:36 compute-0 sudo[140417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:36 compute-0 ceph-mon[74318]: pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:37 compute-0 sudo[140420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:34:37 compute-0 sudo[140420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:37 compute-0 sudo[140420]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:37 compute-0 python3.9[140419]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038475.9781675-566-133716964886050/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:34:37 compute-0 sudo[140417]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:37 compute-0 sudo[140445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:34:37 compute-0 sudo[140445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:37 compute-0 sudo[140445]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:34:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:37.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:37 compute-0 sudo[140522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:34:37 compute-0 sudo[140522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:37 compute-0 sudo[140522]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:37 compute-0 sudo[140572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:34:37 compute-0 sudo[140572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:37 compute-0 sudo[140572]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:37 compute-0 sudo[140597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:34:37 compute-0 sudo[140597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:37 compute-0 sudo[140597]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:37 compute-0 sudo[140644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 21 23:34:37 compute-0 sudo[140644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:37 compute-0 sudo[140720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swhjhejqbfyadcubhzoxjxipkympfika ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038477.4879313-611-75095789104284/AnsiballZ_stat.py'
Jan 21 23:34:37 compute-0 sudo[140720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:38 compute-0 sudo[140644]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:34:38 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:34:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:34:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:34:38 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:34:38 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:34:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:34:38 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:34:38 compute-0 python3.9[140722]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:34:38 compute-0 sudo[140744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:34:38 compute-0 sudo[140744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:38 compute-0 sudo[140744]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:38 compute-0 sudo[140720]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:38 compute-0 sudo[140771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:34:38 compute-0 sudo[140771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:38 compute-0 sudo[140771]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:38 compute-0 sudo[140843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:34:38 compute-0 sudo[140843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:38 compute-0 sudo[140843]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:34:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:38.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:34:38 compute-0 sudo[140891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:34:38 compute-0 sudo[140891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:38 compute-0 sudo[140966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsfchzodkbwkjdkdsjseghklmyzfgqzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038477.4879313-611-75095789104284/AnsiballZ_copy.py'
Jan 21 23:34:38 compute-0 sudo[140966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:38 compute-0 python3.9[140970]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038477.4879313-611-75095789104284/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:34:38 compute-0 sudo[140966]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:39 compute-0 sudo[140891]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:34:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:34:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:34:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:34:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:34:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 07acfd3c-993a-4d2c-849a-c16f9a76131b does not exist
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 157fe1e9-1592-49a2-8c3f-21c85e5143df does not exist
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev fcbd9f1e-8f86-43e5-b6cf-598c90485ea5 does not exist
Jan 21 23:34:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:34:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:34:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:34:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:34:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:34:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
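
Note: the handle_command entries show the cephadm mgr module driving the mon with JSON mon_commands (config generate-minimal-conf, auth get, and osd tree with states=["destroyed"] to find reusable OSD ids). The same JSON bodies can be sent from any authorized client through the rados Python bindings; a sketch, with the conffile path an assumption for a typical cephadm host:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumed path
    cluster.connect()
    cmd = {"prefix": "osd tree", "states": ["destroyed"], "format": "json"}
    # mon_command takes the JSON command string plus an input buffer and
    # returns (retcode, output bytes, error/status string).
    ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, json.loads(out or b"{}"))
    cluster.shutdown()
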
Jan 21 23:34:39 compute-0 ceph-mon[74318]: pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:34:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:34:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:34:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:34:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:34:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:34:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:34:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:34:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:34:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:34:39 compute-0 sudo[141059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:34:39 compute-0 sudo[141059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:39 compute-0 sudo[141059]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:34:39
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'vms', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', '.rgw.root', 'default.rgw.log', 'volumes']
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:34:39 compute-0 sudo[141113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:34:39 compute-0 sudo[141113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:34:39 compute-0 sudo[141113]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:34:39 compute-0 sudo[141154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:34:39 compute-0 sudo[141154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:39 compute-0 sudo[141154]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:34:39 compute-0 sudo[141239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwksxtrdxdxlrtoonikanwbfuznelnus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038479.0666173-656-60255838496789/AnsiballZ_file.py'
Jan 21 23:34:39 compute-0 sudo[141239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:39 compute-0 sudo[141209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:34:39 compute-0 sudo[141209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:34:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:39.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:34:39 compute-0 python3.9[141250]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:34:39 compute-0 sudo[141239]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:39 compute-0 podman[141316]: 2026-01-21 23:34:39.742126518 +0000 UTC m=+0.062423666 container create e9190fbfca1e6c1434b1852b6eac96629b2c27c467933d8b5924973710144e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_herschel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:34:39 compute-0 systemd[1]: Started libpod-conmon-e9190fbfca1e6c1434b1852b6eac96629b2c27c467933d8b5924973710144e0e.scope.
Jan 21 23:34:39 compute-0 podman[141316]: 2026-01-21 23:34:39.715302989 +0000 UTC m=+0.035600187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:34:39 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:34:39 compute-0 podman[141316]: 2026-01-21 23:34:39.856622157 +0000 UTC m=+0.176919385 container init e9190fbfca1e6c1434b1852b6eac96629b2c27c467933d8b5924973710144e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_herschel, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 21 23:34:39 compute-0 podman[141316]: 2026-01-21 23:34:39.872408445 +0000 UTC m=+0.192705573 container start e9190fbfca1e6c1434b1852b6eac96629b2c27c467933d8b5924973710144e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_herschel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 23:34:39 compute-0 podman[141316]: 2026-01-21 23:34:39.876419975 +0000 UTC m=+0.196717133 container attach e9190fbfca1e6c1434b1852b6eac96629b2c27c467933d8b5924973710144e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_herschel, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 23:34:39 compute-0 trusting_herschel[141349]: 167 167
Jan 21 23:34:39 compute-0 systemd[1]: libpod-e9190fbfca1e6c1434b1852b6eac96629b2c27c467933d8b5924973710144e0e.scope: Deactivated successfully.
Jan 21 23:34:39 compute-0 conmon[141349]: conmon e9190fbfca1e6c1434b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9190fbfca1e6c1434b1852b6eac96629b2c27c467933d8b5924973710144e0e.scope/container/memory.events
Jan 21 23:34:39 compute-0 podman[141316]: 2026-01-21 23:34:39.881838199 +0000 UTC m=+0.202135377 container died e9190fbfca1e6c1434b1852b6eac96629b2c27c467933d8b5924973710144e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_herschel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:34:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae0ed4d719abb292f2068ba161b25cc6409f566663ccc5682538a42ecd0f27dc-merged.mount: Deactivated successfully.
Jan 21 23:34:39 compute-0 podman[141316]: 2026-01-21 23:34:39.938943654 +0000 UTC m=+0.259240802 container remove e9190fbfca1e6c1434b1852b6eac96629b2c27c467933d8b5924973710144e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:34:39 compute-0 systemd[1]: libpod-conmon-e9190fbfca1e6c1434b1852b6eac96629b2c27c467933d8b5924973710144e0e.scope: Deactivated successfully.
Jan 21 23:34:40 compute-0 podman[141452]: 2026-01-21 23:34:40.141773142 +0000 UTC m=+0.063093538 container create 6f5dde56efa4478c7eaaf8a321bc68e748cf9a099a6d3f305667bbe72ffa4319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_joliot, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:34:40 compute-0 sudo[141494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwrxtcmlkiuqssaxoaykjcdboerhrmto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038479.8336215-680-270754470040264/AnsiballZ_command.py'
Jan 21 23:34:40 compute-0 sudo[141494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:40 compute-0 systemd[1]: Started libpod-conmon-6f5dde56efa4478c7eaaf8a321bc68e748cf9a099a6d3f305667bbe72ffa4319.scope.
Jan 21 23:34:40 compute-0 podman[141452]: 2026-01-21 23:34:40.119277832 +0000 UTC m=+0.040598218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:34:40 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f455d4039fc793ddae2ade22cd5cf43aa1dc2be7ffb1b7a0570c192d2cb31d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f455d4039fc793ddae2ade22cd5cf43aa1dc2be7ffb1b7a0570c192d2cb31d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f455d4039fc793ddae2ade22cd5cf43aa1dc2be7ffb1b7a0570c192d2cb31d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f455d4039fc793ddae2ade22cd5cf43aa1dc2be7ffb1b7a0570c192d2cb31d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f455d4039fc793ddae2ade22cd5cf43aa1dc2be7ffb1b7a0570c192d2cb31d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:34:40 compute-0 podman[141452]: 2026-01-21 23:34:40.244176285 +0000 UTC m=+0.165496741 container init 6f5dde56efa4478c7eaaf8a321bc68e748cf9a099a6d3f305667bbe72ffa4319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 21 23:34:40 compute-0 podman[141452]: 2026-01-21 23:34:40.258000372 +0000 UTC m=+0.179320758 container start 6f5dde56efa4478c7eaaf8a321bc68e748cf9a099a6d3f305667bbe72ffa4319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_joliot, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:34:40 compute-0 podman[141452]: 2026-01-21 23:34:40.262771176 +0000 UTC m=+0.184091622 container attach 6f5dde56efa4478c7eaaf8a321bc68e748cf9a099a6d3f305667bbe72ffa4319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Jan 21 23:34:40 compute-0 python3.9[141496]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:34:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:40.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:40 compute-0 sudo[141494]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:41 compute-0 interesting_joliot[141499]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:34:41 compute-0 interesting_joliot[141499]: --> relative data size: 1.0
Jan 21 23:34:41 compute-0 interesting_joliot[141499]: --> All data devices are unavailable
Jan 21 23:34:41 compute-0 systemd[1]: libpod-6f5dde56efa4478c7eaaf8a321bc68e748cf9a099a6d3f305667bbe72ffa4319.scope: Deactivated successfully.
Jan 21 23:34:41 compute-0 podman[141452]: 2026-01-21 23:34:41.098768911 +0000 UTC m=+1.020089267 container died 6f5dde56efa4478c7eaaf8a321bc68e748cf9a099a6d3f305667bbe72ffa4319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_joliot, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:34:41 compute-0 ceph-mon[74318]: pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-97f455d4039fc793ddae2ade22cd5cf43aa1dc2be7ffb1b7a0570c192d2cb31d-merged.mount: Deactivated successfully.
Jan 21 23:34:41 compute-0 podman[141452]: 2026-01-21 23:34:41.171211439 +0000 UTC m=+1.092531795 container remove 6f5dde56efa4478c7eaaf8a321bc68e748cf9a099a6d3f305667bbe72ffa4319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 21 23:34:41 compute-0 sudo[141678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzfrlxixkvzgvtaqxjlcfbmabyzvhllx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038480.6398938-704-35669933009765/AnsiballZ_blockinfile.py'
Jan 21 23:34:41 compute-0 sudo[141678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:41 compute-0 systemd[1]: libpod-conmon-6f5dde56efa4478c7eaaf8a321bc68e748cf9a099a6d3f305667bbe72ffa4319.scope: Deactivated successfully.
Jan 21 23:34:41 compute-0 sudo[141209]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:41 compute-0 sudo[141681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:34:41 compute-0 sudo[141681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:41 compute-0 sudo[141681]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:41 compute-0 python3.9[141680]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:34:41 compute-0 sudo[141706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:34:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:41.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:41 compute-0 sudo[141678]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:41 compute-0 sudo[141706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:41 compute-0 sudo[141706]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:41 compute-0 sudo[141732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:34:41 compute-0 sudo[141732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:41 compute-0 sudo[141732]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:41 compute-0 sudo[141781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:34:41 compute-0 sudo[141781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:41 compute-0 podman[141916]: 2026-01-21 23:34:41.899324944 +0000 UTC m=+0.052065864 container create 82b1dee9633022407051ab634a513bc4733fd81caa2a453bbf0fac6e6a4db3e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 21 23:34:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:41 compute-0 systemd[1]: Started libpod-conmon-82b1dee9633022407051ab634a513bc4733fd81caa2a453bbf0fac6e6a4db3e7.scope.
Jan 21 23:34:41 compute-0 podman[141916]: 2026-01-21 23:34:41.875516104 +0000 UTC m=+0.028257124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:34:41 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:34:42 compute-0 podman[141916]: 2026-01-21 23:34:42.026721222 +0000 UTC m=+0.179462182 container init 82b1dee9633022407051ab634a513bc4733fd81caa2a453bbf0fac6e6a4db3e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_gould, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:34:42 compute-0 podman[141916]: 2026-01-21 23:34:42.035704724 +0000 UTC m=+0.188445654 container start 82b1dee9633022407051ab634a513bc4733fd81caa2a453bbf0fac6e6a4db3e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_gould, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 23:34:42 compute-0 podman[141916]: 2026-01-21 23:34:42.039520149 +0000 UTC m=+0.192261089 container attach 82b1dee9633022407051ab634a513bc4733fd81caa2a453bbf0fac6e6a4db3e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_gould, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 21 23:34:42 compute-0 quirky_gould[141961]: 167 167
Jan 21 23:34:42 compute-0 systemd[1]: libpod-82b1dee9633022407051ab634a513bc4733fd81caa2a453bbf0fac6e6a4db3e7.scope: Deactivated successfully.
Jan 21 23:34:42 compute-0 podman[141916]: 2026-01-21 23:34:42.043949223 +0000 UTC m=+0.196690203 container died 82b1dee9633022407051ab634a513bc4733fd81caa2a453bbf0fac6e6a4db3e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:34:42 compute-0 sudo[141991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijjjvnskvshskewiaxjlnxjbsqwmfbew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038481.6864378-731-58740832912106/AnsiballZ_command.py'
Jan 21 23:34:42 compute-0 sudo[141991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-65a9c3171073cd997d8f33e52d47778c23dc3e817a138dc80e72c4b3bf206912-merged.mount: Deactivated successfully.
Jan 21 23:34:42 compute-0 podman[141916]: 2026-01-21 23:34:42.091398046 +0000 UTC m=+0.244139016 container remove 82b1dee9633022407051ab634a513bc4733fd81caa2a453bbf0fac6e6a4db3e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_gould, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:34:42 compute-0 systemd[1]: libpod-conmon-82b1dee9633022407051ab634a513bc4733fd81caa2a453bbf0fac6e6a4db3e7.scope: Deactivated successfully.
Jan 21 23:34:42 compute-0 python3.9[141996]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:34:42 compute-0 podman[142013]: 2026-01-21 23:34:42.28517601 +0000 UTC m=+0.054145107 container create 4fa5349408a7a717689b395536b336864ef7b79e5dc50ed8ad75ba88d68bdaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 23:34:42 compute-0 sudo[141991]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:42 compute-0 systemd[1]: Started libpod-conmon-4fa5349408a7a717689b395536b336864ef7b79e5dc50ed8ad75ba88d68bdaaa.scope.
Jan 21 23:34:42 compute-0 podman[142013]: 2026-01-21 23:34:42.255934566 +0000 UTC m=+0.024903753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:34:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:34:42 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d37892e23acf456a3d0ea232bd6a29254cb123816cc0a67418fc89b4a924a12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d37892e23acf456a3d0ea232bd6a29254cb123816cc0a67418fc89b4a924a12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d37892e23acf456a3d0ea232bd6a29254cb123816cc0a67418fc89b4a924a12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d37892e23acf456a3d0ea232bd6a29254cb123816cc0a67418fc89b4a924a12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:34:42 compute-0 podman[142013]: 2026-01-21 23:34:42.369944311 +0000 UTC m=+0.138913428 container init 4fa5349408a7a717689b395536b336864ef7b79e5dc50ed8ad75ba88d68bdaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wilson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 21 23:34:42 compute-0 podman[142013]: 2026-01-21 23:34:42.37883953 +0000 UTC m=+0.147808637 container start 4fa5349408a7a717689b395536b336864ef7b79e5dc50ed8ad75ba88d68bdaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wilson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:34:42 compute-0 podman[142013]: 2026-01-21 23:34:42.38250911 +0000 UTC m=+0.151478207 container attach 4fa5349408a7a717689b395536b336864ef7b79e5dc50ed8ad75ba88d68bdaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:34:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:34:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:42.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:34:42 compute-0 sudo[142184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opgiprlchtvigqppbgdjurcmjtitcdxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038482.5061307-755-189699067214840/AnsiballZ_stat.py'
Jan 21 23:34:42 compute-0 sudo[142184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:43 compute-0 python3.9[142186]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:34:43 compute-0 sudo[142184]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:43 compute-0 ceph-mon[74318]: pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:43 compute-0 adoring_wilson[142030]: {
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:     "1": [
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:         {
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:             "devices": [
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:                 "/dev/loop3"
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:             ],
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:             "lv_name": "ceph_lv0",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:             "lv_size": "7511998464",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:             "name": "ceph_lv0",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:             "tags": {
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:                 "ceph.cluster_name": "ceph",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:                 "ceph.crush_device_class": "",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:                 "ceph.encrypted": "0",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:                 "ceph.osd_id": "1",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:                 "ceph.type": "block",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:                 "ceph.vdo": "0"
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:             },
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:             "type": "block",
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:             "vg_name": "ceph_vg0"
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:         }
Jan 21 23:34:43 compute-0 adoring_wilson[142030]:     ]
Jan 21 23:34:43 compute-0 adoring_wilson[142030]: }
Jan 21 23:34:43 compute-0 systemd[1]: libpod-4fa5349408a7a717689b395536b336864ef7b79e5dc50ed8ad75ba88d68bdaaa.scope: Deactivated successfully.
Jan 21 23:34:43 compute-0 podman[142013]: 2026-01-21 23:34:43.173200606 +0000 UTC m=+0.942169713 container died 4fa5349408a7a717689b395536b336864ef7b79e5dc50ed8ad75ba88d68bdaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 23:34:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d37892e23acf456a3d0ea232bd6a29254cb123816cc0a67418fc89b4a924a12-merged.mount: Deactivated successfully.
Jan 21 23:34:43 compute-0 podman[142013]: 2026-01-21 23:34:43.229468425 +0000 UTC m=+0.998437522 container remove 4fa5349408a7a717689b395536b336864ef7b79e5dc50ed8ad75ba88d68bdaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wilson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:34:43 compute-0 systemd[1]: libpod-conmon-4fa5349408a7a717689b395536b336864ef7b79e5dc50ed8ad75ba88d68bdaaa.scope: Deactivated successfully.
Jan 21 23:34:43 compute-0 sudo[141781]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:43 compute-0 sudo[142231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:34:43 compute-0 sudo[142231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:43 compute-0 sudo[142231]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:34:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:43.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:34:43 compute-0 sudo[142280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:34:43 compute-0 sudo[142280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:43 compute-0 sudo[142280]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:43 compute-0 sudo[142328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:34:43 compute-0 sudo[142328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:43 compute-0 sudo[142328]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:43 compute-0 sudo[142371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:34:43 compute-0 sudo[142371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:43 compute-0 sudo[142471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsgjhgivjsnmzlpnjoaouilbvqzqxlea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038483.334222-779-269316519099859/AnsiballZ_command.py'
Jan 21 23:34:43 compute-0 sudo[142471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:43 compute-0 podman[142498]: 2026-01-21 23:34:43.817318643 +0000 UTC m=+0.053454876 container create a1998e1a4a690e73eb0e1055de0cea8178ab4582b1c2024039e5d9d90ef2d3a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hoover, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 21 23:34:43 compute-0 systemd[1]: Started libpod-conmon-a1998e1a4a690e73eb0e1055de0cea8178ab4582b1c2024039e5d9d90ef2d3a3.scope.
Jan 21 23:34:43 compute-0 python3.9[142483]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:34:43 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:34:43 compute-0 podman[142498]: 2026-01-21 23:34:43.789162643 +0000 UTC m=+0.025298866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:34:43 compute-0 podman[142498]: 2026-01-21 23:34:43.895521776 +0000 UTC m=+0.131658019 container init a1998e1a4a690e73eb0e1055de0cea8178ab4582b1c2024039e5d9d90ef2d3a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 23:34:43 compute-0 podman[142498]: 2026-01-21 23:34:43.903210118 +0000 UTC m=+0.139346331 container start a1998e1a4a690e73eb0e1055de0cea8178ab4582b1c2024039e5d9d90ef2d3a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hoover, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:34:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:43 compute-0 podman[142498]: 2026-01-21 23:34:43.906490077 +0000 UTC m=+0.142626270 container attach a1998e1a4a690e73eb0e1055de0cea8178ab4582b1c2024039e5d9d90ef2d3a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hoover, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:34:43 compute-0 elated_hoover[142512]: 167 167
Jan 21 23:34:43 compute-0 systemd[1]: libpod-a1998e1a4a690e73eb0e1055de0cea8178ab4582b1c2024039e5d9d90ef2d3a3.scope: Deactivated successfully.
Jan 21 23:34:43 compute-0 podman[142498]: 2026-01-21 23:34:43.911071255 +0000 UTC m=+0.147207448 container died a1998e1a4a690e73eb0e1055de0cea8178ab4582b1c2024039e5d9d90ef2d3a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hoover, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 21 23:34:43 compute-0 sudo[142471]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-872660911a9d4da41684590d49bb68d69a6a53c7227875a7a2253781ac3def41-merged.mount: Deactivated successfully.
Jan 21 23:34:43 compute-0 podman[142498]: 2026-01-21 23:34:43.947233597 +0000 UTC m=+0.183369810 container remove a1998e1a4a690e73eb0e1055de0cea8178ab4582b1c2024039e5d9d90ef2d3a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:34:43 compute-0 systemd[1]: libpod-conmon-a1998e1a4a690e73eb0e1055de0cea8178ab4582b1c2024039e5d9d90ef2d3a3.scope: Deactivated successfully.
Jan 21 23:34:44 compute-0 podman[142565]: 2026-01-21 23:34:44.138049462 +0000 UTC m=+0.063220901 container create 5af1678022ea251c31e2a14afa9e769a02b07625a7b669367c5da7b36626e6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 23:34:44 compute-0 systemd[1]: Started libpod-conmon-5af1678022ea251c31e2a14afa9e769a02b07625a7b669367c5da7b36626e6a9.scope.
Jan 21 23:34:44 compute-0 podman[142565]: 2026-01-21 23:34:44.1128255 +0000 UTC m=+0.037996999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:34:44 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1669cd431ecc6a6d60f7cb489cadcc30f349dd28465061f82c3cf636d17f045d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1669cd431ecc6a6d60f7cb489cadcc30f349dd28465061f82c3cf636d17f045d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1669cd431ecc6a6d60f7cb489cadcc30f349dd28465061f82c3cf636d17f045d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1669cd431ecc6a6d60f7cb489cadcc30f349dd28465061f82c3cf636d17f045d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:34:44 compute-0 podman[142565]: 2026-01-21 23:34:44.240215678 +0000 UTC m=+0.165387147 container init 5af1678022ea251c31e2a14afa9e769a02b07625a7b669367c5da7b36626e6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:34:44 compute-0 podman[142565]: 2026-01-21 23:34:44.245743925 +0000 UTC m=+0.170915404 container start 5af1678022ea251c31e2a14afa9e769a02b07625a7b669367c5da7b36626e6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jemison, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 23:34:44 compute-0 podman[142565]: 2026-01-21 23:34:44.250197719 +0000 UTC m=+0.175369188 container attach 5af1678022ea251c31e2a14afa9e769a02b07625a7b669367c5da7b36626e6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 23:34:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:44.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:44 compute-0 sudo[142711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebzcybioegfahhajvhcfttfpmxfpvmwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038484.1980066-803-7297291014098/AnsiballZ_file.py'
Jan 21 23:34:44 compute-0 sudo[142711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:44 compute-0 python3.9[142713]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:34:44 compute-0 sudo[142711]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:45 compute-0 friendly_jemison[142604]: {
Jan 21 23:34:45 compute-0 friendly_jemison[142604]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:34:45 compute-0 friendly_jemison[142604]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:34:45 compute-0 friendly_jemison[142604]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:34:45 compute-0 friendly_jemison[142604]:         "osd_id": 1,
Jan 21 23:34:45 compute-0 friendly_jemison[142604]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:34:45 compute-0 friendly_jemison[142604]:         "type": "bluestore"
Jan 21 23:34:45 compute-0 friendly_jemison[142604]:     }
Jan 21 23:34:45 compute-0 friendly_jemison[142604]: }
Jan 21 23:34:45 compute-0 systemd[1]: libpod-5af1678022ea251c31e2a14afa9e769a02b07625a7b669367c5da7b36626e6a9.scope: Deactivated successfully.
Jan 21 23:34:45 compute-0 podman[142565]: 2026-01-21 23:34:45.126899493 +0000 UTC m=+1.052070982 container died 5af1678022ea251c31e2a14afa9e769a02b07625a7b669367c5da7b36626e6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:34:45 compute-0 ceph-mon[74318]: pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1669cd431ecc6a6d60f7cb489cadcc30f349dd28465061f82c3cf636d17f045d-merged.mount: Deactivated successfully.
Jan 21 23:34:45 compute-0 podman[142565]: 2026-01-21 23:34:45.190904326 +0000 UTC m=+1.116075805 container remove 5af1678022ea251c31e2a14afa9e769a02b07625a7b669367c5da7b36626e6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 23:34:45 compute-0 systemd[1]: libpod-conmon-5af1678022ea251c31e2a14afa9e769a02b07625a7b669367c5da7b36626e6a9.scope: Deactivated successfully.
Jan 21 23:34:45 compute-0 sudo[142371]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:34:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:34:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:34:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:34:45 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 9aa56c79-3813-4d01-afc2-7d790360040a does not exist
Jan 21 23:34:45 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev f44c96b4-9905-48a4-a5e7-30057df44397 does not exist
Jan 21 23:34:45 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 831454e5-647e-4568-af78-a4275e5dea53 does not exist
Jan 21 23:34:45 compute-0 sudo[142767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:34:45 compute-0 sudo[142767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:45 compute-0 sudo[142767]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:45.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
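
The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102, which recur roughly once per second each for the rest of this log, are health probes against radosgw; status 200 with a zero-length body means the gateway is serving. A probe can be reproduced with curl, though the listen port here is an assumption, since these lines do not show the beast bind address:

    # curl -I issues a HEAD request, matching the probes above
    curl -I http://compute-0:8080/
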
Jan 21 23:34:45 compute-0 sudo[142793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:34:45 compute-0 sudo[142793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:45 compute-0 sudo[142793]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:45 compute-0 python3.9[142943]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
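
gather_subset=['!all', '!min', 'machine'] restricts fact collection to the machine-hardware subset, which keeps this frequently repeated setup task cheap. The equivalent ad-hoc invocation (a sketch, assuming an inventory that resolves compute-0):

    ansible compute-0 -m ansible.builtin.setup \
      -a 'gather_subset=!all,!min,machine gather_timeout=10'
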
Jan 21 23:34:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:34:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:34:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:46.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:47 compute-0 sudo[143094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfafqjjtcohjpszujuokfhhddsrkwfua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038486.9189394-923-135054478373308/AnsiballZ_command.py'
Jan 21 23:34:47 compute-0 sudo[143094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:47 compute-0 ceph-mon[74318]: pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
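
The _set_new_cache_sizes entries, which repeat every few seconds below with identical values, are the monitor's cache autotuner republishing unchanged targets. In MiB (a quick check with awk):

    awk 'BEGIN { printf "cache %.0f MiB, inc/full %.0f MiB, kv %.0f MiB\n",
      1020054731/2^20, 348127232/2^20, 318767104/2^20 }'
    # -> cache 973 MiB, inc/full 332 MiB, kv 304 MiB
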
Jan 21 23:34:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:47.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:47 compute-0 python3.9[143096]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:34:47 compute-0 ovs-vsctl[143098]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 21 23:34:47 compute-0 sudo[143094]: pam_unix(sudo:session): session closed for user root
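
The ovs-vsctl call above (logged once by ansible and once by ovs-vsctl itself) seeds the OVN chassis configuration: geneve encapsulation on 172.19.0.100, the datacentre:br-ex bridge mapping, and the southbound database endpoint. The values can be read back to confirm they landed; a sketch, assuming ovs-vsctl on PATH:

    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote    # "ssl:ovsdbserver-sb.openstack.svc:6642"
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip  # "172.19.0.100"
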
Jan 21 23:34:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:48 compute-0 sudo[143248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtrywqjtsguylginfdnesdhfmmzoafas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038487.7924843-950-223673451899448/AnsiballZ_command.py'
Jan 21 23:34:48 compute-0 sudo[143248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:48 compute-0 python3.9[143250]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:34:48 compute-0 sudo[143248]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:34:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:48.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:34:49 compute-0 sudo[143403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hstdjivzdyzutophbjvhvmiugkcfncvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038488.6546357-974-136108845175703/AnsiballZ_command.py'
Jan 21 23:34:49 compute-0 sudo[143403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:49 compute-0 python3.9[143405]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:34:49 compute-0 ovs-vsctl[143406]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 21 23:34:49 compute-0 sudo[143403]: pam_unix(sudo:session): session closed for user root
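
Together with the `ovs-vsctl show | grep -q "Manager"` task at 23:34:48, this is a check-then-create: pipefail makes grep's exit status the task's status, the check fails when no Manager row exists, and only then does the play create one listening on ptcp:6640:127.0.0.1 for local ovsdb clients (the create evidently ran here, so the check had found none). Collapsed into one idempotent line, using only commands shown in this log:

    sh -c 'set -o pipefail; ovs-vsctl show | grep -q Manager' || \
      ovs-vsctl --timeout=5 --id=@manager -- create Manager 'target="ptcp:6640:127.0.0.1"' \
        -- add Open_vSwitch . manager_options @manager
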
Jan 21 23:34:49 compute-0 ceph-mon[74318]: pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:49.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:49 compute-0 python3.9[143557]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:34:50 compute-0 ceph-mon[74318]: pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:50.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:50 compute-0 sudo[143709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktxpcxzrazdgcfxzroxxdvfeqfsgpttr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038490.2882817-1025-77993973982885/AnsiballZ_file.py'
Jan 21 23:34:50 compute-0 sudo[143709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:50 compute-0 python3.9[143711]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:34:50 compute-0 sudo[143709]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:51.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:51 compute-0 sudo[143862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyeqxycllfmvynjmtapkgfqofazomvig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038491.1394827-1049-109096502931141/AnsiballZ_stat.py'
Jan 21 23:34:51 compute-0 sudo[143862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:51 compute-0 python3.9[143864]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:34:51 compute-0 sudo[143862]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:51 compute-0 sudo[143940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptuvlhlhwhkgaentebmhzszxfgafgehw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038491.1394827-1049-109096502931141/AnsiballZ_file.py'
Jan 21 23:34:51 compute-0 sudo[143940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:52 compute-0 python3.9[143942]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:34:52 compute-0 sudo[143940]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:34:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:34:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:52.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:34:52 compute-0 ceph-mon[74318]: pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:53.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:53 compute-0 sudo[144093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhguevksoaopzkhwyiwwytupnpnftlat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038492.3196547-1049-160033198579036/AnsiballZ_stat.py'
Jan 21 23:34:53 compute-0 sudo[144093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:53 compute-0 python3.9[144095]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:34:53 compute-0 sudo[144093]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:34:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
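
Each pg_autoscaler line above is reproducible from its own numbers: pg target = capacity ratio x bias x 300, where 22535995392 bytes is the 21 GiB cluster capacity from the pgmap lines and 300 is presumably mon_target_pg_per_osd (default 100) times the cluster's 3 OSDs. A worked check for the '.mgr' and 'cephfs.cephfs.meta' rows:

    awk 'BEGIN {
      printf "%.10f\n", 2.0538165363856318e-05 * 1.0 * 300   # ~0.0061614496 ('.mgr')
      printf "%.10f\n", 1.4540294062907128e-06 * 4.0 * 300   # ~0.0017448353 (cephfs meta)
    }'

The fractional targets are then quantized up to each pool's floor (1, 16, or 32 here), so no pg_num changes.
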
Jan 21 23:34:54 compute-0 sudo[144171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnxbvnlbtotjbjmdezrydrpwnxkkxbgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038492.3196547-1049-160033198579036/AnsiballZ_file.py'
Jan 21 23:34:54 compute-0 sudo[144171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:54 compute-0 python3.9[144173]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:34:54 compute-0 sudo[144171]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:54.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:54 compute-0 sudo[144323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-newxdxnfytlewguhpizmsaszgzqakptq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038494.626062-1118-280919575613688/AnsiballZ_file.py'
Jan 21 23:34:54 compute-0 sudo[144323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:55 compute-0 ceph-mon[74318]: pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:55 compute-0 python3.9[144325]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:34:55 compute-0 sudo[144323]: pam_unix(sudo:session): session closed for user root
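
mode=420 in the file task above is not an error: the playbook passed the mode as an integer, and 420 decimal is 0644 octal, the same permissions the neighbouring tasks spell as 0644. A one-line check:

    printf '%o\n' 420   # -> 644
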
Jan 21 23:34:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:55.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:55 compute-0 sudo[144476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swwwkgypbarnokickkyskeakmuumdean ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038495.4120915-1142-181178398509801/AnsiballZ_stat.py'
Jan 21 23:34:55 compute-0 sudo[144476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:56 compute-0 python3.9[144478]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:34:56 compute-0 sudo[144476]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:34:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:56.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:34:56 compute-0 sudo[144554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmslhskypcdbgqlymgakivloiswjdtwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038495.4120915-1142-181178398509801/AnsiballZ_file.py'
Jan 21 23:34:56 compute-0 sudo[144554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:56 compute-0 python3.9[144556]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:34:56 compute-0 sudo[144554]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:57 compute-0 ceph-mon[74318]: pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:34:57 compute-0 sudo[144656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:34:57 compute-0 sudo[144656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:57 compute-0 sudo[144656]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:57.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:57 compute-0 sudo[144753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbcotvnvxhainjuucbkhooafuiwynumb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038497.0849316-1178-184736556843883/AnsiballZ_stat.py'
Jan 21 23:34:57 compute-0 sudo[144753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:57 compute-0 sudo[144709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:34:57 compute-0 sudo[144709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:34:57 compute-0 sudo[144709]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:57 compute-0 python3.9[144758]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:34:57 compute-0 sudo[144753]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:57 compute-0 sudo[144835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxtrwxqoynvvkehdhlheifmffevykoku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038497.0849316-1178-184736556843883/AnsiballZ_file.py'
Jan 21 23:34:57 compute-0 sudo[144835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:58 compute-0 python3.9[144837]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:34:58 compute-0 sudo[144835]: pam_unix(sudo:session): session closed for user root
Jan 21 23:34:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:34:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:34:58.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:34:58 compute-0 sudo[144987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuuetqdjemrnpyccrcktwxzlxvepwqmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038498.3884516-1214-167742898255260/AnsiballZ_systemd.py'
Jan 21 23:34:58 compute-0 sudo[144987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:34:59 compute-0 ceph-mon[74318]: pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:34:59 compute-0 python3.9[144989]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:34:59 compute-0 systemd[1]: Reloading.
Jan 21 23:34:59 compute-0 systemd-rc-local-generator[145013]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:34:59 compute-0 systemd-sysv-generator[145020]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:34:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:34:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:34:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:34:59.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:34:59 compute-0 sudo[144987]: pam_unix(sudo:session): session closed for user root
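
The unit file, the 91-edpm-container-shutdown.preset file, and the systemd task above together install and activate the shutdown hook; presumably the preset file carries an "enable edpm-container-shutdown.service" line (its contents are not shown in this log). The module call reduces to roughly:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service
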
Jan 21 23:34:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:35:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:00.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:35:01 compute-0 ceph-mon[74318]: pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:01 compute-0 sudo[145177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymtsyknsamjdqshkisofndraudmneffv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038500.7416768-1238-125543970255490/AnsiballZ_stat.py'
Jan 21 23:35:01 compute-0 sudo[145177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:01 compute-0 python3.9[145179]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:35:01 compute-0 sudo[145177]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:01.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:01 compute-0 sudo[145256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhrthuglhvogopfphoukzeunwmgnmfss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038500.7416768-1238-125543970255490/AnsiballZ_file.py'
Jan 21 23:35:01 compute-0 sudo[145256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:01 compute-0 python3.9[145258]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:35:01 compute-0 sudo[145256]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:35:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:35:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:02.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:35:02 compute-0 sudo[145408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swmsjvavqcsrxhmttpzfnxayjqjumero ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038502.1783326-1274-140272707896069/AnsiballZ_stat.py'
Jan 21 23:35:02 compute-0 sudo[145408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:02 compute-0 python3.9[145410]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:35:02 compute-0 sudo[145408]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:03 compute-0 ceph-mon[74318]: pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:03 compute-0 sudo[145486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyluoyswxwaremilmpblsnwczuxbbhav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038502.1783326-1274-140272707896069/AnsiballZ_file.py'
Jan 21 23:35:03 compute-0 sudo[145486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:03 compute-0 python3.9[145488]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:35:03 compute-0 sudo[145486]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:35:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:03.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:35:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:03 compute-0 sudo[145639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbszwbuilosqensmaizyhnyxlxkelbqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038503.585088-1310-128478858088347/AnsiballZ_systemd.py'
Jan 21 23:35:03 compute-0 sudo[145639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:04 compute-0 python3.9[145641]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:35:04 compute-0 systemd[1]: Reloading.
Jan 21 23:35:04 compute-0 systemd-sysv-generator[145667]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:35:04 compute-0 systemd-rc-local-generator[145664]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:35:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:04.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:04 compute-0 systemd[1]: Starting Create netns directory...
Jan 21 23:35:04 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 21 23:35:04 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 21 23:35:04 compute-0 systemd[1]: Finished Create netns directory.
Jan 21 23:35:04 compute-0 sudo[145639]: pam_unix(sudo:session): session closed for user root
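
netns-placeholder is a oneshot: it starts, a mount unit for /run/netns/placeholder briefly appears and is deactivated, and the service finishes. That pattern suggests the unit creates and removes a throwaway namespace so /run/netns exists as a mount point for later container tasks. The real ExecStart is not visible here; a plausible equivalent:

    ip netns add placeholder     # creates /run/netns and bind-mounts the ns there
    ip netns delete placeholder  # removes the ns; /run/netns itself remains
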
Jan 21 23:35:05 compute-0 ceph-mon[74318]: pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:35:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:05.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:35:05 compute-0 sudo[145833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpwszsghskgaaxzzpcphuakyctjxsnfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038505.1469276-1340-237952987253080/AnsiballZ_file.py'
Jan 21 23:35:05 compute-0 sudo[145833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:05 compute-0 python3.9[145835]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:35:05 compute-0 sudo[145833]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:06 compute-0 sudo[145985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdedrdttirvarlvxysvonmijzabsigvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038506.096474-1364-115568434393314/AnsiballZ_stat.py'
Jan 21 23:35:06 compute-0 sudo[145985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:35:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:06.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:35:06 compute-0 python3.9[145987]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:35:06 compute-0 sudo[145985]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:07 compute-0 sudo[146108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmjimfmcymnhojbikfztaibqydkxmlpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038506.096474-1364-115568434393314/AnsiballZ_copy.py'
Jan 21 23:35:07 compute-0 sudo[146108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:07 compute-0 ceph-mon[74318]: pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:07 compute-0 python3.9[146110]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769038506.096474-1364-115568434393314/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:35:07 compute-0 sudo[146108]: pam_unix(sudo:session): session closed for user root
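
The stat/copy pair above is ansible's usual idempotent deployment: hash the destination, copy only on mismatch. The recorded SHA-1 makes the deployed healthcheck easy to verify later:

    sha1sum /var/lib/openstack/healthchecks/ovn_controller/healthcheck
    # expect 4098dd010265fabdf5c26b97d169fc4e575ff457, per the copy task above
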
Jan 21 23:35:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:35:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:35:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:07.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:35:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:08 compute-0 sudo[146261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfkmfrvjufdktpanaaaexlvdrushcazr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038507.7956965-1415-48644009378276/AnsiballZ_file.py'
Jan 21 23:35:08 compute-0 sudo[146261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:08 compute-0 python3.9[146263]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:35:08 compute-0 sudo[146261]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:08.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:09 compute-0 sudo[146413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gibaazvttixkdyaendyxktdttvyedyeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038508.633608-1439-14176137881401/AnsiballZ_file.py'
Jan 21 23:35:09 compute-0 sudo[146413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:09 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 23:35:09 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 6023 writes, 25K keys, 6023 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6023 writes, 990 syncs, 6.08 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6023 writes, 25K keys, 6023 commit groups, 1.0 writes per commit group, ingest: 19.26 MB, 0.03 MB/s
                                           Interval WAL: 6023 writes, 990 syncs, 6.08 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 21 23:35:09 compute-0 python3.9[146415]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:35:09 compute-0 sudo[146413]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:35:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:35:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:35:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:35:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:35:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:35:09 compute-0 ceph-mon[74318]: pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:09.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:09 compute-0 sudo[146566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggcsdrijwgpndildaomvvxirmsvxytld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038509.457639-1463-91108987929824/AnsiballZ_stat.py'
Jan 21 23:35:09 compute-0 sudo[146566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:10 compute-0 python3.9[146568]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:35:10 compute-0 sudo[146566]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:35:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:10.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:35:10 compute-0 sudo[146689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkwllogakhvmpszzxbyyvkrxqgmnxcze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038509.457639-1463-91108987929824/AnsiballZ_copy.py'
Jan 21 23:35:10 compute-0 sudo[146689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:10 compute-0 python3.9[146691]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769038509.457639-1463-91108987929824/.source.json _original_basename=.z75untr8 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:35:10 compute-0 sudo[146689]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:11 compute-0 ceph-mon[74318]: pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:11.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:11 compute-0 python3.9[146841]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:35:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:35:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:12.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:13 compute-0 ceph-mon[74318]: pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:13.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:13 compute-0 sudo[147264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilgtorryemibemitvqttyvdssblrlixp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038513.3322504-1583-173366036337388/AnsiballZ_container_config_data.py'
Jan 21 23:35:13 compute-0 sudo[147264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:14 compute-0 python3.9[147266]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 21 23:35:14 compute-0 sudo[147264]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:35:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:14.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:35:15 compute-0 ceph-mon[74318]: pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:15.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:15 compute-0 sudo[147417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssmuijfsvrsobzoluzrdrvfrlwldeuke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038514.7166507-1616-157692476821923/AnsiballZ_container_config_hash.py'
Jan 21 23:35:15 compute-0 sudo[147417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:15 compute-0 python3.9[147419]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 21 23:35:15 compute-0 sudo[147417]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:16 compute-0 ceph-mon[74318]: pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:35:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:16.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:35:16 compute-0 sudo[147569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aogqpucvjzanfwisyaqkbuycmfnyzgon ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769038516.077378-1646-188370533242549/AnsiballZ_edpm_container_manage.py'
Jan 21 23:35:16 compute-0 sudo[147569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:16 compute-0 python3[147571]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 21 23:35:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:35:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:17.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:17 compute-0 sudo[147600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:35:17 compute-0 sudo[147600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:17 compute-0 sudo[147600]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:17 compute-0 sudo[147625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:35:17 compute-0 sudo[147625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:17 compute-0 sudo[147625]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:18.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:19 compute-0 ceph-mon[74318]: pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:19.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:19 compute-0 ceph-mgr[74614]: [devicehealth INFO root] Check health
Jan 21 23:35:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:35:20.069375) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038520069405, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1553, "num_deletes": 251, "total_data_size": 2882532, "memory_usage": 2930960, "flush_reason": "Manual Compaction"}
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038520090793, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2830235, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10782, "largest_seqno": 12334, "table_properties": {"data_size": 2823011, "index_size": 4295, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14168, "raw_average_key_size": 19, "raw_value_size": 2808724, "raw_average_value_size": 3852, "num_data_blocks": 192, "num_entries": 729, "num_filter_entries": 729, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769038358, "oldest_key_time": 1769038358, "file_creation_time": 1769038520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 21494 microseconds, and 7816 cpu microseconds.
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:35:20.090863) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2830235 bytes OK
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:35:20.090888) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:35:20.093310) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:35:20.093331) EVENT_LOG_v1 {"time_micros": 1769038520093325, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:35:20.093349) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2876066, prev total WAL file size 2876066, number of live WAL files 2.
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:35:20.094062) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2763KB)], [26(7585KB)]
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038520094148, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10598276, "oldest_snapshot_seqno": -1}
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4015 keys, 8425459 bytes, temperature: kUnknown
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038520151498, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8425459, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8395766, "index_size": 18575, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97459, "raw_average_key_size": 24, "raw_value_size": 8320446, "raw_average_value_size": 2072, "num_data_blocks": 800, "num_entries": 4015, "num_filter_entries": 4015, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769038520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:35:20.151814) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8425459 bytes
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:35:20.153485) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.4 rd, 146.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(6.7) write-amplify(3.0) OK, records in: 4534, records dropped: 519 output_compression: NoCompression
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:35:20.153510) EVENT_LOG_v1 {"time_micros": 1769038520153498, "job": 10, "event": "compaction_finished", "compaction_time_micros": 57468, "compaction_time_cpu_micros": 18947, "output_level": 6, "num_output_files": 1, "total_output_size": 8425459, "num_input_records": 4534, "num_output_records": 4015, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038520154155, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038520155759, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:35:20.093982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:35:20.155832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:35:20.155837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:35:20.155839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:35:20.155840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:35:20 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:35:20.155842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:35:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:20.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:21 compute-0 ceph-mon[74318]: pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:35:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:21.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:35:21 compute-0 podman[147586]: 2026-01-21 23:35:21.790863622 +0000 UTC m=+4.765077938 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 21 23:35:21 compute-0 podman[147760]: 2026-01-21 23:35:21.912696593 +0000 UTC m=+0.045076343 container create 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 21 23:35:21 compute-0 podman[147760]: 2026-01-21 23:35:21.889190733 +0000 UTC m=+0.021570493 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 21 23:35:21 compute-0 python3[147571]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 21 23:35:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:22 compute-0 sudo[147569]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:35:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:22.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:23 compute-0 sudo[147948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohizisekenpslerydmhlgcnzzaeqiwzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038522.9425592-1670-271426675386760/AnsiballZ_stat.py'
Jan 21 23:35:23 compute-0 sudo[147948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:35:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:23.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:35:23 compute-0 ceph-mon[74318]: pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:23 compute-0 python3.9[147950]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:35:23 compute-0 sudo[147948]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:24 compute-0 sudo[148103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jygwvtbzfpuwobxmtidjxlfwwbzvgooz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038523.8850465-1697-210422354827693/AnsiballZ_file.py'
Jan 21 23:35:24 compute-0 sudo[148103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:24 compute-0 ceph-mon[74318]: pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:24 compute-0 python3.9[148105]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:35:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:35:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:24.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:35:24 compute-0 sudo[148103]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:24 compute-0 sudo[148179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwcfcdgdepbljdcgoanujnqtzfmcvbga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038523.8850465-1697-210422354827693/AnsiballZ_stat.py'
Jan 21 23:35:24 compute-0 sudo[148179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:25 compute-0 python3.9[148181]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:35:25 compute-0 sudo[148179]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:35:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:25.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:35:25 compute-0 sudo[148331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzobddysfubqfatgvjjzkjzsivyfrnol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038525.1264844-1697-192861181230411/AnsiballZ_copy.py'
Jan 21 23:35:25 compute-0 sudo[148331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:25 compute-0 python3.9[148333]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769038525.1264844-1697-192861181230411/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:35:25 compute-0 sudo[148331]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:26 compute-0 sudo[148407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kptpoqdcptyheaupsljsttsyygjhnycs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038525.1264844-1697-192861181230411/AnsiballZ_systemd.py'
Jan 21 23:35:26 compute-0 sudo[148407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:26 compute-0 python3.9[148409]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 23:35:26 compute-0 systemd[1]: Reloading.
Jan 21 23:35:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:35:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:26.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:35:26 compute-0 systemd-sysv-generator[148433]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:35:26 compute-0 systemd-rc-local-generator[148429]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:35:26 compute-0 sudo[148407]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:27 compute-0 sudo[148517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpdtbyboivddjfniqbkysrjgoibwgvvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038525.1264844-1697-192861181230411/AnsiballZ_systemd.py'
Jan 21 23:35:27 compute-0 sudo[148517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:27 compute-0 ceph-mon[74318]: pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:35:27 compute-0 python3.9[148519]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:35:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:27.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:27 compute-0 systemd[1]: Reloading.
Jan 21 23:35:27 compute-0 systemd-rc-local-generator[148549]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:35:27 compute-0 systemd-sysv-generator[148554]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:35:27 compute-0 systemd[1]: Starting ovn_controller container...
Jan 21 23:35:27 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9cebbe7c34fa5ec9d0a9447b43d498ae812a1333e4e077fd4c300f6ce206ee8/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 21 23:35:27 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c.
Jan 21 23:35:27 compute-0 podman[148561]: 2026-01-21 23:35:27.905098791 +0000 UTC m=+0.147131298 container init 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 21 23:35:27 compute-0 ovn_controller[148575]: + sudo -E kolla_set_configs
Jan 21 23:35:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:27 compute-0 podman[148561]: 2026-01-21 23:35:27.954476182 +0000 UTC m=+0.196508629 container start 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 21 23:35:27 compute-0 edpm-start-podman-container[148561]: ovn_controller
Jan 21 23:35:28 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 21 23:35:28 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 21 23:35:28 compute-0 edpm-start-podman-container[148560]: Creating additional drop-in dependency for "ovn_controller" (125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c)
Jan 21 23:35:28 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 21 23:35:28 compute-0 podman[148582]: 2026-01-21 23:35:28.303684776 +0000 UTC m=+0.333791688 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 21 23:35:28 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 21 23:35:28 compute-0 systemd[1]: Reloading.
Jan 21 23:35:28 compute-0 systemd-sysv-generator[148655]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:35:28 compute-0 systemd-rc-local-generator[148652]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:35:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:28.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:28 compute-0 systemd[1]: 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c-557e33d2ddea82d1.service: Main process exited, code=exited, status=1/FAILURE
Jan 21 23:35:28 compute-0 systemd[1]: 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c-557e33d2ddea82d1.service: Failed with result 'exit-code'.
Jan 21 23:35:28 compute-0 systemd[1]: Started ovn_controller container.
Jan 21 23:35:28 compute-0 systemd[148622]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 21 23:35:28 compute-0 sudo[148517]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:28 compute-0 systemd[148622]: Queued start job for default target Main User Target.
Jan 21 23:35:28 compute-0 systemd[148622]: Created slice User Application Slice.
Jan 21 23:35:28 compute-0 systemd[148622]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 21 23:35:28 compute-0 systemd[148622]: Started Daily Cleanup of User's Temporary Directories.
Jan 21 23:35:28 compute-0 systemd[148622]: Reached target Paths.
Jan 21 23:35:28 compute-0 systemd[148622]: Reached target Timers.
Jan 21 23:35:28 compute-0 systemd[148622]: Starting D-Bus User Message Bus Socket...
Jan 21 23:35:28 compute-0 systemd[148622]: Starting Create User's Volatile Files and Directories...
Jan 21 23:35:28 compute-0 systemd[148622]: Listening on D-Bus User Message Bus Socket.
Jan 21 23:35:28 compute-0 systemd[148622]: Finished Create User's Volatile Files and Directories.
Jan 21 23:35:28 compute-0 systemd[148622]: Reached target Sockets.
Jan 21 23:35:28 compute-0 systemd[148622]: Reached target Basic System.
Jan 21 23:35:28 compute-0 systemd[148622]: Reached target Main User Target.
Jan 21 23:35:28 compute-0 systemd[148622]: Startup finished in 157ms.
Jan 21 23:35:28 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 21 23:35:28 compute-0 systemd[1]: Started Session c1 of User root.
Jan 21 23:35:28 compute-0 ovn_controller[148575]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 21 23:35:28 compute-0 ovn_controller[148575]: INFO:__main__:Validating config file
Jan 21 23:35:28 compute-0 ovn_controller[148575]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 21 23:35:28 compute-0 ovn_controller[148575]: INFO:__main__:Writing out command to execute
Jan 21 23:35:28 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 21 23:35:28 compute-0 ovn_controller[148575]: ++ cat /run_command
Jan 21 23:35:28 compute-0 ovn_controller[148575]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 21 23:35:28 compute-0 ovn_controller[148575]: + ARGS=
Jan 21 23:35:28 compute-0 ovn_controller[148575]: + sudo kolla_copy_cacerts
Jan 21 23:35:28 compute-0 systemd[1]: Started Session c2 of User root.
Jan 21 23:35:28 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 21 23:35:28 compute-0 ovn_controller[148575]: + [[ ! -n '' ]]
Jan 21 23:35:28 compute-0 ovn_controller[148575]: + . kolla_extend_start
Jan 21 23:35:28 compute-0 ovn_controller[148575]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 21 23:35:28 compute-0 ovn_controller[148575]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 21 23:35:28 compute-0 ovn_controller[148575]: + umask 0022
Jan 21 23:35:28 compute-0 ovn_controller[148575]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 21 23:35:29 compute-0 NetworkManager[48940]: <info>  [1769038529.0245] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 21 23:35:29 compute-0 NetworkManager[48940]: <info>  [1769038529.0256] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 23:35:29 compute-0 NetworkManager[48940]: <warn>  [1769038529.0259] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 23:35:29 compute-0 NetworkManager[48940]: <info>  [1769038529.0266] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 21 23:35:29 compute-0 NetworkManager[48940]: <info>  [1769038529.0271] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 21 23:35:29 compute-0 NetworkManager[48940]: <info>  [1769038529.0274] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 21 23:35:29 compute-0 kernel: br-int: entered promiscuous mode
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 21 23:35:29 compute-0 ovn_controller[148575]: 2026-01-21T23:35:29Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 21 23:35:29 compute-0 NetworkManager[48940]: <info>  [1769038529.0508] manager: (ovn-4d1543-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 21 23:35:29 compute-0 NetworkManager[48940]: <info>  [1769038529.0515] manager: (ovn-18ac42-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Jan 21 23:35:29 compute-0 NetworkManager[48940]: <info>  [1769038529.0521] manager: (ovn-d3d811-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 21 23:35:29 compute-0 systemd-udevd[148739]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 23:35:29 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 21 23:35:29 compute-0 systemd-udevd[148746]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 23:35:29 compute-0 NetworkManager[48940]: <info>  [1769038529.1380] device (genev_sys_6081): carrier: link connected
Jan 21 23:35:29 compute-0 NetworkManager[48940]: <info>  [1769038529.1384] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Jan 21 23:35:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:29.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:29 compute-0 ceph-mon[74318]: pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:30 compute-0 python3.9[148844]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 21 23:35:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:35:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:30.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:35:30 compute-0 ceph-mon[74318]: pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:31.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:31 compute-0 sudo[148995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwnywhykantkdmcdsmzfnllxgsuxxgsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038531.2614202-1832-118886922576734/AnsiballZ_stat.py'
Jan 21 23:35:31 compute-0 sudo[148995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:31 compute-0 python3.9[148997]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:35:31 compute-0 sudo[148995]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:32 compute-0 sudo[149118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhatvuvorhrdxhtixdwivnuiydqbnoxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038531.2614202-1832-118886922576734/AnsiballZ_copy.py'
Jan 21 23:35:32 compute-0 sudo[149118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:35:32 compute-0 python3.9[149120]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769038531.2614202-1832-118886922576734/.source.yaml _original_basename=.1tnk7kd9 follow=False checksum=44d9d675fbb9d735209cc6da254ef4dcd33ae941 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:35:32 compute-0 sudo[149118]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:35:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:32.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:35:32 compute-0 ceph-mon[74318]: pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:33 compute-0 sudo[149270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcljnbfanodnvivdagoehecyqhkxhyye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038532.7412744-1877-63238261540402/AnsiballZ_command.py'
Jan 21 23:35:33 compute-0 sudo[149270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:33 compute-0 python3.9[149272]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:35:33 compute-0 ovs-vsctl[149273]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 21 23:35:33 compute-0 sudo[149270]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:35:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:33.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:35:33 compute-0 sudo[149424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfjuwumcpbtxhrbrdvfqusmzkpfsnhmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038533.5164192-1901-244649467851160/AnsiballZ_command.py'
Jan 21 23:35:33 compute-0 sudo[149424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:34 compute-0 python3.9[149426]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:35:34 compute-0 ovs-vsctl[149428]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 21 23:35:34 compute-0 sudo[149424]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:35:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:34.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:35:34 compute-0 ceph-mon[74318]: pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:35 compute-0 sudo[149579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elajyigdgdvvyqvgwpsfsiyjughzkcze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038534.8552308-1943-152380098545471/AnsiballZ_command.py'
Jan 21 23:35:35 compute-0 sudo[149579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:35 compute-0 python3.9[149581]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:35:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:35:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:35.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:35:35 compute-0 ovs-vsctl[149583]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 21 23:35:35 compute-0 sudo[149579]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:36 compute-0 sshd-session[136794]: Connection closed by 192.168.122.30 port 33432
Jan 21 23:35:36 compute-0 sshd-session[136791]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:35:36 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Jan 21 23:35:36 compute-0 systemd[1]: session-46.scope: Consumed 1min 5.023s CPU time.
Jan 21 23:35:36 compute-0 systemd-logind[786]: Session 46 logged out. Waiting for processes to exit.
Jan 21 23:35:36 compute-0 systemd-logind[786]: Removed session 46.
Jan 21 23:35:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:36.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:37 compute-0 ceph-mon[74318]: pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:35:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:37.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:37 compute-0 sudo[149609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:35:37 compute-0 sudo[149609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:37 compute-0 sudo[149609]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:37 compute-0 sudo[149634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:35:37 compute-0 sudo[149634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:37 compute-0 sudo[149634]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:38.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:39 compute-0 ceph-mon[74318]: pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:39 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 21 23:35:39 compute-0 systemd[148622]: Activating special unit Exit the Session...
Jan 21 23:35:39 compute-0 systemd[148622]: Stopped target Main User Target.
Jan 21 23:35:39 compute-0 systemd[148622]: Stopped target Basic System.
Jan 21 23:35:39 compute-0 systemd[148622]: Stopped target Paths.
Jan 21 23:35:39 compute-0 systemd[148622]: Stopped target Sockets.
Jan 21 23:35:39 compute-0 systemd[148622]: Stopped target Timers.
Jan 21 23:35:39 compute-0 systemd[148622]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 21 23:35:39 compute-0 systemd[148622]: Closed D-Bus User Message Bus Socket.
Jan 21 23:35:39 compute-0 systemd[148622]: Stopped Create User's Volatile Files and Directories.
Jan 21 23:35:39 compute-0 systemd[148622]: Removed slice User Application Slice.
Jan 21 23:35:39 compute-0 systemd[148622]: Reached target Shutdown.
Jan 21 23:35:39 compute-0 systemd[148622]: Finished Exit the Session.
Jan 21 23:35:39 compute-0 systemd[148622]: Reached target Exit the Session.
Jan 21 23:35:39 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 21 23:35:39 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 21 23:35:39 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 21 23:35:39 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 21 23:35:39 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 21 23:35:39 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 21 23:35:39 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:35:39
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['images', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', '.mgr', '.rgw.root', 'vms', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:35:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:35:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:39.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:35:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:40.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:41 compute-0 sshd-session[149662]: Accepted publickey for zuul from 192.168.122.30 port 47520 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:35:41 compute-0 ceph-mon[74318]: pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:41 compute-0 systemd-logind[786]: New session 48 of user zuul.
Jan 21 23:35:41 compute-0 systemd[1]: Started Session 48 of User zuul.
Jan 21 23:35:41 compute-0 sshd-session[149662]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:35:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:41.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:42 compute-0 python3.9[149816]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:35:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:35:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:42.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:43 compute-0 ceph-mon[74318]: pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:43 compute-0 sudo[149971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryousbzxrlvifljevhfwcsfqjjovcdsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038542.823104-62-259749665554687/AnsiballZ_file.py'
Jan 21 23:35:43 compute-0 sudo[149971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:43.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:43 compute-0 python3.9[149973]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:35:43 compute-0 sudo[149971]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:44 compute-0 sudo[150123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdhufrqxvshqezbpwaztebeikaulpxxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038543.7453802-62-198479977053324/AnsiballZ_file.py'
Jan 21 23:35:44 compute-0 sudo[150123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:44 compute-0 python3.9[150125]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:35:44 compute-0 sudo[150123]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:44.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:44 compute-0 sudo[150275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lifnqebrfnacgbslcykjthupeyhrgfzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038544.424421-62-167894760070349/AnsiballZ_file.py'
Jan 21 23:35:44 compute-0 sudo[150275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:45 compute-0 python3.9[150277]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:35:45 compute-0 ceph-mon[74318]: pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:45 compute-0 sudo[150275]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:35:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:45.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:35:45 compute-0 sudo[150428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aobpvsjhxcczixujcxvtybfpmniirwgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038545.2844129-62-132204480499899/AnsiballZ_file.py'
Jan 21 23:35:45 compute-0 sudo[150428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:45 compute-0 sudo[150431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:35:45 compute-0 sudo[150431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:45 compute-0 sudo[150431]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:45 compute-0 python3.9[150430]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:35:45 compute-0 sudo[150428]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:45 compute-0 sudo[150456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:35:45 compute-0 sudo[150456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:45 compute-0 sudo[150456]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:45 compute-0 sudo[150494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:35:45 compute-0 sudo[150494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:45 compute-0 sudo[150494]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:45 compute-0 sudo[150532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:35:45 compute-0 sudo[150532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:46 compute-0 sudo[150693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuxogqsimagpwildsppbhonzgcmfxnrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038545.9386063-62-75987620794052/AnsiballZ_file.py'
Jan 21 23:35:46 compute-0 sudo[150693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:35:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:35:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:35:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:35:46 compute-0 python3.9[150695]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:35:46 compute-0 sudo[150532]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:46 compute-0 sudo[150693]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 21 23:35:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 21 23:35:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 21 23:35:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 21 23:35:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:35:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:46.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:35:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 21 23:35:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:35:47 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:35:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:35:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:35:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:35:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:35:47 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev f7fa1062-5f62-4e8d-809d-832ea4911656 does not exist
Jan 21 23:35:47 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 8cc1e6c5-bd42-45af-a21f-f50d537628cb does not exist
Jan 21 23:35:47 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 7059487f-c9db-482a-a522-f124ed4abf83 does not exist
Jan 21 23:35:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:35:47 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:35:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:35:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:35:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:35:47 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:35:47 compute-0 ceph-mon[74318]: pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:35:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:35:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 21 23:35:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 21 23:35:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 21 23:35:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:35:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:35:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:35:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:35:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:35:47 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:35:47 compute-0 sudo[150864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:35:47 compute-0 sudo[150864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:47 compute-0 sudo[150864]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:47 compute-0 python3.9[150863]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:35:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:35:47 compute-0 sudo[150890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:35:47 compute-0 sudo[150890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:47 compute-0 sudo[150890]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:47.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:47 compute-0 sudo[150935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:35:47 compute-0 sudo[150935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:47 compute-0 sudo[150935]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:47 compute-0 sudo[150964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:35:47 compute-0 sudo[150964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:48 compute-0 podman[151081]: 2026-01-21 23:35:48.00097557 +0000 UTC m=+0.069573123 container create 1ea0ad87421551dafd8942a07deab8fef8485575a360a3482ddcfbe635c70dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Jan 21 23:35:48 compute-0 systemd[1]: Started libpod-conmon-1ea0ad87421551dafd8942a07deab8fef8485575a360a3482ddcfbe635c70dc1.scope.
Jan 21 23:35:48 compute-0 podman[151081]: 2026-01-21 23:35:47.970754427 +0000 UTC m=+0.039352010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:35:48 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:35:48 compute-0 podman[151081]: 2026-01-21 23:35:48.10716321 +0000 UTC m=+0.175760713 container init 1ea0ad87421551dafd8942a07deab8fef8485575a360a3482ddcfbe635c70dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bhaskara, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:35:48 compute-0 podman[151081]: 2026-01-21 23:35:48.114586104 +0000 UTC m=+0.183183607 container start 1ea0ad87421551dafd8942a07deab8fef8485575a360a3482ddcfbe635c70dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:35:48 compute-0 podman[151081]: 2026-01-21 23:35:48.1184417 +0000 UTC m=+0.187039283 container attach 1ea0ad87421551dafd8942a07deab8fef8485575a360a3482ddcfbe635c70dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:35:48 compute-0 stoic_bhaskara[151121]: 167 167
Jan 21 23:35:48 compute-0 systemd[1]: libpod-1ea0ad87421551dafd8942a07deab8fef8485575a360a3482ddcfbe635c70dc1.scope: Deactivated successfully.
Jan 21 23:35:48 compute-0 conmon[151121]: conmon 1ea0ad87421551dafd89 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ea0ad87421551dafd8942a07deab8fef8485575a360a3482ddcfbe635c70dc1.scope/container/memory.events
Jan 21 23:35:48 compute-0 podman[151081]: 2026-01-21 23:35:48.125146473 +0000 UTC m=+0.193743996 container died 1ea0ad87421551dafd8942a07deab8fef8485575a360a3482ddcfbe635c70dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:35:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e008ae117ee53b1066fa7206aa5deeb5e3438d8bf892cdb8b54a0f730aea23b5-merged.mount: Deactivated successfully.
Jan 21 23:35:48 compute-0 podman[151081]: 2026-01-21 23:35:48.184426905 +0000 UTC m=+0.253024408 container remove 1ea0ad87421551dafd8942a07deab8fef8485575a360a3482ddcfbe635c70dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bhaskara, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:35:48 compute-0 systemd[1]: libpod-conmon-1ea0ad87421551dafd8942a07deab8fef8485575a360a3482ddcfbe635c70dc1.scope: Deactivated successfully.
Jan 21 23:35:48 compute-0 sudo[151190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfbphhrvfsueaftytgezlgzbbxuhqkgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038547.710782-194-46641921721307/AnsiballZ_seboolean.py'
Jan 21 23:35:48 compute-0 sudo[151190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:48 compute-0 podman[151198]: 2026-01-21 23:35:48.393670338 +0000 UTC m=+0.055672523 container create 52fe0457aac2dc83062929178a490ea7abc7480e69ce2e5e0b67b56d756954d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Jan 21 23:35:48 compute-0 systemd[1]: Started libpod-conmon-52fe0457aac2dc83062929178a490ea7abc7480e69ce2e5e0b67b56d756954d9.scope.
Jan 21 23:35:48 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:35:48 compute-0 podman[151198]: 2026-01-21 23:35:48.367120836 +0000 UTC m=+0.029123091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b02c341586bcc1c4d0c7f6124d7b022982a3d0ade769694495b521b606faef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b02c341586bcc1c4d0c7f6124d7b022982a3d0ade769694495b521b606faef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b02c341586bcc1c4d0c7f6124d7b022982a3d0ade769694495b521b606faef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b02c341586bcc1c4d0c7f6124d7b022982a3d0ade769694495b521b606faef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b02c341586bcc1c4d0c7f6124d7b022982a3d0ade769694495b521b606faef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:35:48 compute-0 podman[151198]: 2026-01-21 23:35:48.476457071 +0000 UTC m=+0.138459266 container init 52fe0457aac2dc83062929178a490ea7abc7480e69ce2e5e0b67b56d756954d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_brattain, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 23:35:48 compute-0 podman[151198]: 2026-01-21 23:35:48.488126053 +0000 UTC m=+0.150128238 container start 52fe0457aac2dc83062929178a490ea7abc7480e69ce2e5e0b67b56d756954d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 21 23:35:48 compute-0 podman[151198]: 2026-01-21 23:35:48.491721202 +0000 UTC m=+0.153723377 container attach 52fe0457aac2dc83062929178a490ea7abc7480e69ce2e5e0b67b56d756954d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_brattain, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 21 23:35:48 compute-0 python3.9[151192]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 21 23:35:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:48.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:49 compute-0 sudo[151190]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:49 compute-0 ceph-mon[74318]: pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:49 compute-0 musing_brattain[151214]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:35:49 compute-0 musing_brattain[151214]: --> relative data size: 1.0
Jan 21 23:35:49 compute-0 musing_brattain[151214]: --> All data devices are unavailable
Jan 21 23:35:49 compute-0 systemd[1]: libpod-52fe0457aac2dc83062929178a490ea7abc7480e69ce2e5e0b67b56d756954d9.scope: Deactivated successfully.
Jan 21 23:35:49 compute-0 podman[151198]: 2026-01-21 23:35:49.416488419 +0000 UTC m=+1.078490684 container died 52fe0457aac2dc83062929178a490ea7abc7480e69ce2e5e0b67b56d756954d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:35:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-71b02c341586bcc1c4d0c7f6124d7b022982a3d0ade769694495b521b606faef-merged.mount: Deactivated successfully.
Jan 21 23:35:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:35:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:49.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:35:49 compute-0 podman[151198]: 2026-01-21 23:35:49.489980761 +0000 UTC m=+1.151982946 container remove 52fe0457aac2dc83062929178a490ea7abc7480e69ce2e5e0b67b56d756954d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:35:49 compute-0 systemd[1]: libpod-conmon-52fe0457aac2dc83062929178a490ea7abc7480e69ce2e5e0b67b56d756954d9.scope: Deactivated successfully.
Jan 21 23:35:49 compute-0 sudo[150964]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:49 compute-0 sudo[151320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:35:49 compute-0 sudo[151320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:49 compute-0 sudo[151320]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:49 compute-0 sudo[151345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:35:49 compute-0 sudo[151345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:49 compute-0 sudo[151345]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:49 compute-0 sudo[151377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:35:49 compute-0 sudo[151377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:49 compute-0 sudo[151377]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:49 compute-0 sudo[151421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:35:49 compute-0 sudo[151421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:50 compute-0 python3.9[151493]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:35:50 compute-0 podman[151555]: 2026-01-21 23:35:50.252982379 +0000 UTC m=+0.059366765 container create df41edc809bdac491f211d8f1f6ce3fd495a84969c832aa85c66e224638d3dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:35:50 compute-0 systemd[1]: Started libpod-conmon-df41edc809bdac491f211d8f1f6ce3fd495a84969c832aa85c66e224638d3dff.scope.
Jan 21 23:35:50 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:35:50 compute-0 podman[151555]: 2026-01-21 23:35:50.228018125 +0000 UTC m=+0.034402531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:35:50 compute-0 podman[151555]: 2026-01-21 23:35:50.338323338 +0000 UTC m=+0.144707804 container init df41edc809bdac491f211d8f1f6ce3fd495a84969c832aa85c66e224638d3dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:35:50 compute-0 ceph-mon[74318]: pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:50 compute-0 podman[151555]: 2026-01-21 23:35:50.35226172 +0000 UTC m=+0.158646096 container start df41edc809bdac491f211d8f1f6ce3fd495a84969c832aa85c66e224638d3dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:35:50 compute-0 podman[151555]: 2026-01-21 23:35:50.355746404 +0000 UTC m=+0.162130820 container attach df41edc809bdac491f211d8f1f6ce3fd495a84969c832aa85c66e224638d3dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:35:50 compute-0 funny_hofstadter[151598]: 167 167
Jan 21 23:35:50 compute-0 systemd[1]: libpod-df41edc809bdac491f211d8f1f6ce3fd495a84969c832aa85c66e224638d3dff.scope: Deactivated successfully.
Jan 21 23:35:50 compute-0 podman[151555]: 2026-01-21 23:35:50.362750586 +0000 UTC m=+0.169135062 container died df41edc809bdac491f211d8f1f6ce3fd495a84969c832aa85c66e224638d3dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Jan 21 23:35:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f23d4e75011cdcc05118a9e72242d593f91db294c8ea4cbeb73615d2dc5f420-merged.mount: Deactivated successfully.
Jan 21 23:35:50 compute-0 podman[151555]: 2026-01-21 23:35:50.40889371 +0000 UTC m=+0.215278116 container remove df41edc809bdac491f211d8f1f6ce3fd495a84969c832aa85c66e224638d3dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 21 23:35:50 compute-0 systemd[1]: libpod-conmon-df41edc809bdac491f211d8f1f6ce3fd495a84969c832aa85c66e224638d3dff.scope: Deactivated successfully.
Jan 21 23:35:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:35:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:50.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:35:50 compute-0 podman[151669]: 2026-01-21 23:35:50.639253692 +0000 UTC m=+0.066123238 container create 54b0f80e9e486755a4c6a486f719bc575d603393230f8693f6a9fa556b29561f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_blackwell, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 21 23:35:50 compute-0 systemd[1]: Started libpod-conmon-54b0f80e9e486755a4c6a486f719bc575d603393230f8693f6a9fa556b29561f.scope.
Jan 21 23:35:50 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:35:50 compute-0 podman[151669]: 2026-01-21 23:35:50.615604448 +0000 UTC m=+0.042474024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5f53dc27440cefa885c082119ae415d89041d3d06618deb58141a9f1e1f66b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5f53dc27440cefa885c082119ae415d89041d3d06618deb58141a9f1e1f66b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5f53dc27440cefa885c082119ae415d89041d3d06618deb58141a9f1e1f66b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5f53dc27440cefa885c082119ae415d89041d3d06618deb58141a9f1e1f66b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:35:50 compute-0 podman[151669]: 2026-01-21 23:35:50.727409577 +0000 UTC m=+0.154279193 container init 54b0f80e9e486755a4c6a486f719bc575d603393230f8693f6a9fa556b29561f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:35:50 compute-0 podman[151669]: 2026-01-21 23:35:50.740865544 +0000 UTC m=+0.167735090 container start 54b0f80e9e486755a4c6a486f719bc575d603393230f8693f6a9fa556b29561f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_blackwell, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 23:35:50 compute-0 podman[151669]: 2026-01-21 23:35:50.747175235 +0000 UTC m=+0.174044811 container attach 54b0f80e9e486755a4c6a486f719bc575d603393230f8693f6a9fa556b29561f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 21 23:35:50 compute-0 python3.9[151702]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769038549.4029374-218-199664648921598/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:35:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:51.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:51 compute-0 strange_blackwell[151712]: {
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:     "1": [
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:         {
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:             "devices": [
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:                 "/dev/loop3"
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:             ],
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:             "lv_name": "ceph_lv0",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:             "lv_size": "7511998464",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:             "name": "ceph_lv0",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:             "tags": {
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:                 "ceph.cluster_name": "ceph",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:                 "ceph.crush_device_class": "",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:                 "ceph.encrypted": "0",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:                 "ceph.osd_id": "1",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:                 "ceph.type": "block",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:                 "ceph.vdo": "0"
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:             },
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:             "type": "block",
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:             "vg_name": "ceph_vg0"
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:         }
Jan 21 23:35:51 compute-0 strange_blackwell[151712]:     ]
Jan 21 23:35:51 compute-0 strange_blackwell[151712]: }
Jan 21 23:35:51 compute-0 systemd[1]: libpod-54b0f80e9e486755a4c6a486f719bc575d603393230f8693f6a9fa556b29561f.scope: Deactivated successfully.
Jan 21 23:35:51 compute-0 podman[151669]: 2026-01-21 23:35:51.54584221 +0000 UTC m=+0.972711746 container died 54b0f80e9e486755a4c6a486f719bc575d603393230f8693f6a9fa556b29561f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_blackwell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 23:35:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef5f53dc27440cefa885c082119ae415d89041d3d06618deb58141a9f1e1f66b-merged.mount: Deactivated successfully.
Jan 21 23:35:51 compute-0 python3.9[151869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:35:51 compute-0 podman[151669]: 2026-01-21 23:35:51.614747663 +0000 UTC m=+1.041617209 container remove 54b0f80e9e486755a4c6a486f719bc575d603393230f8693f6a9fa556b29561f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:35:51 compute-0 systemd[1]: libpod-conmon-54b0f80e9e486755a4c6a486f719bc575d603393230f8693f6a9fa556b29561f.scope: Deactivated successfully.
Jan 21 23:35:51 compute-0 sudo[151421]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:51 compute-0 sudo[151890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:35:51 compute-0 sudo[151890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:51 compute-0 sudo[151890]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:51 compute-0 sudo[151940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:35:51 compute-0 sudo[151940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:51 compute-0 sudo[151940]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:51 compute-0 sudo[151989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:35:51 compute-0 sudo[151989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:51 compute-0 sudo[151989]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:51 compute-0 sudo[152040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:35:51 compute-0 sudo[152040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:52 compute-0 python3.9[152106]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769038551.059655-263-241182082806987/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:35:52 compute-0 podman[152167]: 2026-01-21 23:35:52.351155469 +0000 UTC m=+0.042804535 container create f6447b1238f45c3a1de0f888ef0080e1b0386415a7b049ab3b8b70b176fe8b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_galileo, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:35:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:35:52 compute-0 systemd[1]: Started libpod-conmon-f6447b1238f45c3a1de0f888ef0080e1b0386415a7b049ab3b8b70b176fe8b74.scope.
Jan 21 23:35:52 compute-0 podman[152167]: 2026-01-21 23:35:52.329174093 +0000 UTC m=+0.020823169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:35:52 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:35:52 compute-0 podman[152167]: 2026-01-21 23:35:52.452943664 +0000 UTC m=+0.144592770 container init f6447b1238f45c3a1de0f888ef0080e1b0386415a7b049ab3b8b70b176fe8b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 23:35:52 compute-0 podman[152167]: 2026-01-21 23:35:52.465242616 +0000 UTC m=+0.156891652 container start f6447b1238f45c3a1de0f888ef0080e1b0386415a7b049ab3b8b70b176fe8b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_galileo, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:35:52 compute-0 podman[152167]: 2026-01-21 23:35:52.468826535 +0000 UTC m=+0.160475601 container attach f6447b1238f45c3a1de0f888ef0080e1b0386415a7b049ab3b8b70b176fe8b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_galileo, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 21 23:35:52 compute-0 jolly_galileo[152186]: 167 167
Jan 21 23:35:52 compute-0 systemd[1]: libpod-f6447b1238f45c3a1de0f888ef0080e1b0386415a7b049ab3b8b70b176fe8b74.scope: Deactivated successfully.
Jan 21 23:35:52 compute-0 podman[152167]: 2026-01-21 23:35:52.474838926 +0000 UTC m=+0.166487992 container died f6447b1238f45c3a1de0f888ef0080e1b0386415a7b049ab3b8b70b176fe8b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_galileo, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 21 23:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-de30820524635230ade614f6f7074c3cad502d501c5f0f2ea50bd653b3fd9d3f-merged.mount: Deactivated successfully.
Jan 21 23:35:52 compute-0 podman[152167]: 2026-01-21 23:35:52.530940371 +0000 UTC m=+0.222589437 container remove f6447b1238f45c3a1de0f888ef0080e1b0386415a7b049ab3b8b70b176fe8b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:35:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:52.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
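
Aside: the anonymous "HEAD / HTTP/1.0" requests recurring every couple of seconds from 192.168.122.100 and 192.168.122.102 throughout this window look like load-balancer health probes against radosgw; every one returns 200 with zero bytes. A sketch for splitting the beast access line into fields (field meanings inferred from the line itself; the three "-" columns, empty on these requests, are skipped over):

    import re

    line = ('beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous '
            '[21/Jan/2026:23:35:52.534 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')

    pat = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s')

    print(pat.match(line).groupdict())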
Jan 21 23:35:52 compute-0 systemd[1]: libpod-conmon-f6447b1238f45c3a1de0f888ef0080e1b0386415a7b049ab3b8b70b176fe8b74.scope: Deactivated successfully.
Jan 21 23:35:52 compute-0 podman[152267]: 2026-01-21 23:35:52.77145599 +0000 UTC m=+0.060491129 container create 5b4e0c85a7d77c8952929a62f24079f8c37f43a98884757449f9f512c84c92c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 23:35:52 compute-0 systemd[1]: Started libpod-conmon-5b4e0c85a7d77c8952929a62f24079f8c37f43a98884757449f9f512c84c92c5.scope.
Jan 21 23:35:52 compute-0 podman[152267]: 2026-01-21 23:35:52.742023551 +0000 UTC m=+0.031058730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:35:52 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a6afbf5f2aeb7483e45cab1b08032a9cf83c36cbf061a47f40488193d7ad63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a6afbf5f2aeb7483e45cab1b08032a9cf83c36cbf061a47f40488193d7ad63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a6afbf5f2aeb7483e45cab1b08032a9cf83c36cbf061a47f40488193d7ad63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a6afbf5f2aeb7483e45cab1b08032a9cf83c36cbf061a47f40488193d7ad63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
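
Aside: the four xfs messages above are the kernel noting that this filesystem's on-disk inode timestamps (presumably created without the xfs bigtime feature) top out at 0x7fffffff seconds after the Unix epoch, the classic 32-bit time_t limit. What that bound means in calendar time:

    from datetime import datetime, timezone

    limit = 0x7fffffff                      # 2147483647 seconds
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00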
Jan 21 23:35:52 compute-0 podman[152267]: 2026-01-21 23:35:52.899674265 +0000 UTC m=+0.188709414 container init 5b4e0c85a7d77c8952929a62f24079f8c37f43a98884757449f9f512c84c92c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bose, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:35:52 compute-0 podman[152267]: 2026-01-21 23:35:52.912988997 +0000 UTC m=+0.202024116 container start 5b4e0c85a7d77c8952929a62f24079f8c37f43a98884757449f9f512c84c92c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bose, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 21 23:35:52 compute-0 podman[152267]: 2026-01-21 23:35:52.916921726 +0000 UTC m=+0.205956905 container attach 5b4e0c85a7d77c8952929a62f24079f8c37f43a98884757449f9f512c84c92c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bose, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:35:52 compute-0 sudo[152356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngapaoubvxhuxwjbzqkxabjcyclrqqhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038552.5905352-314-29639093408687/AnsiballZ_setup.py'
Jan 21 23:35:52 compute-0 sudo[152356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:53 compute-0 ceph-mon[74318]: pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
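
Aside: these pgmap lines tick over every couple of seconds and differ only in version number: all 305 PGs stay active+clean with 456 KiB of data for the whole window, i.e. the cluster is healthy and essentially idle while the deployment runs. If one wanted to chart them, the fields split out cleanly:

    import re

    line = ("pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, "
            "149 MiB used, 21 GiB / 21 GiB avail")

    m = re.match(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>.+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail", line)
    print(m.groupdict())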
Jan 21 23:35:53 compute-0 python3.9[152358]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:35:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:53.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:53 compute-0 sudo[152356]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:53 compute-0 sleepy_bose[152324]: {
Jan 21 23:35:53 compute-0 sleepy_bose[152324]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:35:53 compute-0 sleepy_bose[152324]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:35:53 compute-0 sleepy_bose[152324]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:35:53 compute-0 sleepy_bose[152324]:         "osd_id": 1,
Jan 21 23:35:53 compute-0 sleepy_bose[152324]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:35:53 compute-0 sleepy_bose[152324]:         "type": "bluestore"
Jan 21 23:35:53 compute-0 sleepy_bose[152324]:     }
Jan 21 23:35:53 compute-0 sleepy_bose[152324]: }
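
Aside: the JSON that sleepy_bose prints has the shape of `ceph-volume raw list` output, keyed by OSD UUID; cephadm runs this to refresh its device inventory, and the config-key set of mgr/cephadm/host.compute-0.devices.0 a moment later fits that reading (the command name is an inference, not something this excerpt states). A sketch of reading it:

    import json

    raw = """{
        "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
            "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 1,
            "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
            "type": "bluestore"
        }
    }"""

    for osd in json.loads(raw).values():
        print(f"osd.{osd['osd_id']} ({osd['type']}) -> {osd['device']}")
    # osd.1 (bluestore) -> /dev/mapper/ceph_vg0-ceph_lv0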
Jan 21 23:35:53 compute-0 systemd[1]: libpod-5b4e0c85a7d77c8952929a62f24079f8c37f43a98884757449f9f512c84c92c5.scope: Deactivated successfully.
Jan 21 23:35:53 compute-0 conmon[152324]: conmon 5b4e0c85a7d77c895292 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b4e0c85a7d77c8952929a62f24079f8c37f43a98884757449f9f512c84c92c5.scope/container/memory.events
Jan 21 23:35:53 compute-0 podman[152267]: 2026-01-21 23:35:53.777356669 +0000 UTC m=+1.066391798 container died 5b4e0c85a7d77c8952929a62f24079f8c37f43a98884757449f9f512c84c92c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bose, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:35:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-32a6afbf5f2aeb7483e45cab1b08032a9cf83c36cbf061a47f40488193d7ad63-merged.mount: Deactivated successfully.
Jan 21 23:35:53 compute-0 podman[152267]: 2026-01-21 23:35:53.835167846 +0000 UTC m=+1.124202935 container remove 5b4e0c85a7d77c8952929a62f24079f8c37f43a98884757449f9f512c84c92c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:35:53 compute-0 systemd[1]: libpod-conmon-5b4e0c85a7d77c8952929a62f24079f8c37f43a98884757449f9f512c84c92c5.scope: Deactivated successfully.
Jan 21 23:35:53 compute-0 sudo[152040]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:35:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:35:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:35:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev ea573c3e-4d46-4b2d-b4aa-5c7736194756 does not exist
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 9eff480b-96d3-4b86-bd27-64e93fa9a583 does not exist
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev a34de7b0-9b4b-4c85-9c4d-8981b700ce8e does not exist
Jan 21 23:35:53 compute-0 sudo[152489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skidiazhxiioygcfytdaneapvtybqpfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038552.5905352-314-29639093408687/AnsiballZ_dnf.py'
Jan 21 23:35:53 compute-0 sudo[152489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:53 compute-0 sudo[152448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:35:53 compute-0 sudo[152448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:53 compute-0 sudo[152448]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:35:53 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
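
Aside: the pg_autoscaler targets above are reproducible from the logged inputs: each pool's pg target is its usage ratio times its bias times 300, where 300 would be consistent with the default mon_target_pg_per_osd=100 across three OSDs (the OSD count is an assumption; this excerpt only shows the factor). Every target is far below the pool's current pg_num, so each is quantized back to the current value and nothing gets resized:

    # usage_ratio and bias copied from the pg_autoscaler lines above
    pools = {
        ".mgr":               (2.0538165363856318e-05, 1.0),
        "cephfs.cephfs.meta": (1.4540294062907128e-06, 4.0),
        ".rgw.root":          (7.270147031453564e-07,  1.0),
        "default.rgw.log":    (6.17962497673553e-06,   1.0),
        "default.rgw.meta":   (3.635073515726782e-07,  4.0),
    }
    for name, (ratio, bias) in pools.items():
        # matches the logged "pg target" values (up to float rounding)
        print(f"{name}: pg target {ratio * bias * 300}")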
Jan 21 23:35:53 compute-0 sudo[152496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:35:53 compute-0 sudo[152496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:54 compute-0 sudo[152496]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:54 compute-0 python3.9[152494]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
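
Aside: the "Invoked with" records that Ansible modules leave throughout this run render the task arguments as key=value pairs whose values are Python literals. For records like this dnf one, whose values contain no embedded spaces, they fold back into a dict with ast.literal_eval (a sketch; the sample keeps only a few of the arguments above):

    import ast

    args = ("name=['openvswitch'] state=present allow_downgrade=False "
            "install_weak_deps=True installroot=/ lock_timeout=30 best=None")

    def parse(record: str) -> dict:
        out = {}
        for tok in record.split():
            key, _, val = tok.partition("=")
            try:
                out[key] = ast.literal_eval(val)   # lists, bools, ints, None
            except (ValueError, SyntaxError):
                out[key] = val                      # bare strings like 'present' or '/'
        return out

    print(parse(args))
    # {'name': ['openvswitch'], 'state': 'present', ..., 'lock_timeout': 30, 'best': None}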
Jan 21 23:35:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:54.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:35:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:35:54 compute-0 ceph-mon[74318]: pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:35:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:55.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:35:55 compute-0 sudo[152489]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:56 compute-0 sudo[152672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sedguaxzospzsamsqejsdykyakuxovij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038555.792622-350-100429743792758/AnsiballZ_systemd.py'
Jan 21 23:35:56 compute-0 sudo[152672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:35:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:56.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:56 compute-0 python3.9[152674]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 23:35:56 compute-0 sudo[152672]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:57 compute-0 ceph-mon[74318]: pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:35:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:57.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:57 compute-0 sudo[152703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:35:57 compute-0 sudo[152703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:57 compute-0 sudo[152703]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:57 compute-0 sudo[152728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:35:57 compute-0 sudo[152728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:35:57 compute-0 sudo[152728]: pam_unix(sudo:session): session closed for user root
Jan 21 23:35:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:35:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:35:58.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:35:58 compute-0 python3.9[152878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:35:58 compute-0 ovn_controller[148575]: 2026-01-21T23:35:58Z|00025|memory|INFO|16000 kB peak resident set size after 30.0 seconds
Jan 21 23:35:58 compute-0 ovn_controller[148575]: 2026-01-21T23:35:58Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Jan 21 23:35:59 compute-0 podman[152905]: 2026-01-21 23:35:59.017248754 +0000 UTC m=+0.120655096 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
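
Aside: the ovn_controller health_status record above reports health_status=healthy and also embeds the container's whole config_data as a Python-literal dict, healthcheck command included. A sketch for pulling that dict out of such a record with a balanced-brace scan plus ast.literal_eval; this works here because none of the embedded strings contain braces, and `line` stands for the full log line:

    import ast

    def config_data(line: str) -> dict:
        start = line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            depth += ch == "{"
            depth -= ch == "}"
            if depth == 0:
                return ast.literal_eval(line[start:i + 1])
        raise ValueError("unbalanced config_data")

    # config_data(line)["healthcheck"]["test"]  ->  '/openstack/healthcheck'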
Jan 21 23:35:59 compute-0 ceph-mon[74318]: pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:35:59 compute-0 python3.9[153024]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769038558.18541-374-146224920423397/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:35:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:35:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:35:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:35:59.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:35:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:00 compute-0 python3.9[153175]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:36:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:36:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:00.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:36:00 compute-0 python3.9[153296]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769038559.527708-374-46510285904970/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:36:01 compute-0 ceph-mon[74318]: pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:36:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:01.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:36:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:02 compute-0 python3.9[153447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:36:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:36:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:02.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:02 compute-0 python3.9[153568]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769038561.5914674-506-208058127710978/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:36:03 compute-0 ceph-mon[74318]: pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:03.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:03 compute-0 python3.9[153719]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:36:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:04 compute-0 python3.9[153840]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769038563.0873363-506-238364900918484/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:36:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:04.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:04 compute-0 python3.9[153990]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:36:05 compute-0 ceph-mon[74318]: pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:05.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:05 compute-0 sudo[154143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvfnviylvvogvnrvnjwpcfjamalprxpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038565.2397804-620-91724626355069/AnsiballZ_file.py'
Jan 21 23:36:05 compute-0 sudo[154143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:05 compute-0 python3.9[154145]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:36:05 compute-0 sudo[154143]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:06 compute-0 sudo[154295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbugfziezptibcmxcjlmouoxjmuolaqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038566.1080325-644-77092311365499/AnsiballZ_stat.py'
Jan 21 23:36:06 compute-0 sudo[154295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:36:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:06.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:36:06 compute-0 python3.9[154297]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:36:06 compute-0 sudo[154295]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:07 compute-0 ceph-mon[74318]: pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:07 compute-0 sudo[154373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlnmxvdjnxvmzzipighrhppdvbedoama ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038566.1080325-644-77092311365499/AnsiballZ_file.py'
Jan 21 23:36:07 compute-0 sudo[154373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:07 compute-0 python3.9[154375]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:36:07 compute-0 sudo[154373]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:36:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:07.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:07 compute-0 sudo[154526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxgevskagyzlwpududyrmvxtmdgmvadu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038567.5597064-644-244092983158310/AnsiballZ_stat.py'
Jan 21 23:36:07 compute-0 sudo[154526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:08 compute-0 python3.9[154528]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:36:08 compute-0 sudo[154526]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:08 compute-0 sudo[154604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idwtetfeclcbhtgxozhetlivkyzctnrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038567.5597064-644-244092983158310/AnsiballZ_file.py'
Jan 21 23:36:08 compute-0 sudo[154604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:36:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:08.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:36:08 compute-0 python3.9[154606]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:36:08 compute-0 sudo[154604]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:09 compute-0 ceph-mon[74318]: pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:09 compute-0 sudo[154756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pntpxslqybftqcrqswgxjegjigcsgccq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038568.788703-713-150819535139363/AnsiballZ_file.py'
Jan 21 23:36:09 compute-0 sudo[154756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:36:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:36:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:36:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:36:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:36:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:36:09 compute-0 python3.9[154758]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:36:09 compute-0 sudo[154756]: pam_unix(sudo:session): session closed for user root
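
Aside: note the mode=420 in the file task above, where neighbouring tasks log mode=0644. That is most likely the classic YAML artifact of writing the mode unquoted: YAML reads a leading-zero integer as octal, and Ansible then logs it in decimal. The permission bits come out identical, so nothing is actually wrong here:

    assert 420 == 0o644          # decimal 420 is octal 644, i.e. rw-r--r--
    print(f"{420:o}")            # -> 644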
Jan 21 23:36:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:09.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:09 compute-0 sudo[154909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymkpveivxygaisetzjigbhalnxytqlbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038569.5669992-737-119090589154556/AnsiballZ_stat.py'
Jan 21 23:36:09 compute-0 sudo[154909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:10 compute-0 python3.9[154911]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:36:10 compute-0 sudo[154909]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:10 compute-0 sudo[154987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaonyenxtupiigfutxrjgrabalsvodes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038569.5669992-737-119090589154556/AnsiballZ_file.py'
Jan 21 23:36:10 compute-0 sudo[154987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:36:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:10.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:36:10 compute-0 python3.9[154989]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:36:10 compute-0 sudo[154987]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:11 compute-0 ceph-mon[74318]: pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:11 compute-0 sudo[155140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtgehnqouopqesgdzmvvvwnmexbonhdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038570.8972764-773-99501978353587/AnsiballZ_stat.py'
Jan 21 23:36:11 compute-0 sudo[155140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:11.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:11 compute-0 python3.9[155142]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:36:11 compute-0 sudo[155140]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:11 compute-0 sudo[155218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chwkgvkvndmzkcausvvbmkplhdnoghdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038570.8972764-773-99501978353587/AnsiballZ_file.py'
Jan 21 23:36:11 compute-0 sudo[155218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:12 compute-0 python3.9[155220]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:36:12 compute-0 sudo[155218]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:36:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:12.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:12 compute-0 sudo[155370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iacljcqwrkglvnbkkvsgxtubhvwuvrhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038572.3584309-809-234689199164127/AnsiballZ_systemd.py'
Jan 21 23:36:12 compute-0 sudo[155370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:13 compute-0 python3.9[155372]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:36:13 compute-0 systemd[1]: Reloading.
Jan 21 23:36:13 compute-0 systemd-rc-local-generator[155393]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:36:13 compute-0 systemd-sysv-generator[155397]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:36:13 compute-0 ceph-mon[74318]: pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:13 compute-0 sudo[155370]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:13.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:14 compute-0 sudo[155559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihbdsvclsiwwemblrhvdnkzghxhkwcoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038573.7373903-833-106095216452076/AnsiballZ_stat.py'
Jan 21 23:36:14 compute-0 sudo[155559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:14 compute-0 python3.9[155561]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:36:14 compute-0 sudo[155559]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:14.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:14 compute-0 sudo[155637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqtipxrwhvhtqhpnblalhpacsodrxace ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038573.7373903-833-106095216452076/AnsiballZ_file.py'
Jan 21 23:36:14 compute-0 sudo[155637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:14 compute-0 python3.9[155639]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:36:14 compute-0 sudo[155637]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:15 compute-0 ceph-mon[74318]: pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:15.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:15 compute-0 sudo[155790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nflibizhowjhwljrlhmaktrptvvijnyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038575.1183748-869-241930851108227/AnsiballZ_stat.py'
Jan 21 23:36:15 compute-0 sudo[155790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:15 compute-0 python3.9[155792]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:36:15 compute-0 sudo[155790]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:16 compute-0 sudo[155868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oogwmcfepyzpvcbchnrgzdhofxrfurts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038575.1183748-869-241930851108227/AnsiballZ_file.py'
Jan 21 23:36:16 compute-0 sudo[155868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:16 compute-0 python3.9[155870]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:36:16 compute-0 sudo[155868]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:16.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:17 compute-0 sudo[156020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owrcjhmeuazfyscvhjcetgiznxosrtsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038576.6477175-905-53591316504747/AnsiballZ_systemd.py'
Jan 21 23:36:17 compute-0 sudo[156020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:17 compute-0 ceph-mon[74318]: pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:17 compute-0 python3.9[156022]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:36:17 compute-0 systemd[1]: Reloading.
Jan 21 23:36:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:36:17 compute-0 systemd-sysv-generator[156047]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:36:17 compute-0 systemd-rc-local-generator[156042]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:36:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:17.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:17 compute-0 systemd[1]: Starting Create netns directory...
Jan 21 23:36:17 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 21 23:36:17 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 21 23:36:17 compute-0 systemd[1]: Finished Create netns directory.
Jan 21 23:36:17 compute-0 sudo[156020]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:18 compute-0 sudo[156090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:36:18 compute-0 sudo[156090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:18 compute-0 sudo[156090]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:18 compute-0 sudo[156115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:36:18 compute-0 sudo[156115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:18 compute-0 sudo[156115]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:18 compute-0 sudo[156265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neaksfdtdbdnuwmpunwxvpxttmwwtrux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038578.1547065-935-103118598571236/AnsiballZ_file.py'
Jan 21 23:36:18 compute-0 sudo[156265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:36:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:18.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:36:18 compute-0 python3.9[156267]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:36:18 compute-0 sudo[156265]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:19 compute-0 sudo[156417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chexsreffvyrpaivszdfobuvxejlzqfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038578.9467216-959-180413149770647/AnsiballZ_stat.py'
Jan 21 23:36:19 compute-0 sudo[156417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:19 compute-0 ceph-mon[74318]: pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:19 compute-0 python3.9[156419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:36:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:36:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:19.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:36:19 compute-0 sudo[156417]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:20 compute-0 sudo[156541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqhlvqlkfkjpxdjqjvrpzscvtegefxbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038578.9467216-959-180413149770647/AnsiballZ_copy.py'
Jan 21 23:36:20 compute-0 sudo[156541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:20 compute-0 python3.9[156543]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769038578.9467216-959-180413149770647/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:36:20 compute-0 sudo[156541]: pam_unix(sudo:session): session closed for user root
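[annotation] The copy above stages the per-service healthcheck script at /var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck; only its sha1 is logged, never its body. Later in this log the same directory is bind-mounted read-only at /openstack inside the ovn_metadata_agent container and wired up via "--healthcheck-command /openstack/healthcheck". A minimal sketch of the contract podman expects from such a script follows (exit 0 = healthy, nonzero = unhealthy); the probe target is an assumption for illustration, not the staged script's actual logic.

    # Hypothetical healthcheck sketch -- the real script's contents are not in
    # this log. Podman only cares about the exit status: 0 healthy, 1 unhealthy.
    import socket
    import sys

    def main() -> int:
        try:
            # Assumed probe: confirm the metadata proxy's UNIX socket answers.
            # The socket path is an assumption, not taken from this log.
            s = socket.socket(socket.AF_UNIX)
            s.settimeout(2)
            s.connect("/var/lib/neutron/metadata_proxy")
            s.close()
            return 0
        except OSError:
            return 1

    if __name__ == "__main__":
        sys.exit(main())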
Jan 21 23:36:20 compute-0 ceph-mon[74318]: pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:20.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:21 compute-0 sudo[156693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnvymcifdtjxktladfhhfvntwpzvwkwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038580.7857585-1010-218249260550564/AnsiballZ_file.py'
Jan 21 23:36:21 compute-0 sudo[156693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:21 compute-0 python3.9[156695]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:36:21 compute-0 sudo[156693]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:21.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:21 compute-0 sudo[156846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzdnolmwjyiovfjlylnjcbssuomqieod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038581.5563552-1034-230362524417538/AnsiballZ_file.py'
Jan 21 23:36:21 compute-0 sudo[156846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:22 compute-0 python3.9[156848]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:36:22 compute-0 sudo[156846]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:36:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:22.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:22 compute-0 sudo[156998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clpyiijaqxabopbzvenpygjxdzsysmsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038582.3783529-1058-220400980815482/AnsiballZ_stat.py'
Jan 21 23:36:22 compute-0 sudo[156998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:22 compute-0 python3.9[157000]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:36:22 compute-0 sudo[156998]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:23 compute-0 ceph-mon[74318]: pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:23 compute-0 sudo[157122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibneovrrgseqlvjcwobydywoqnxmwmoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038582.3783529-1058-220400980815482/AnsiballZ_copy.py'
Jan 21 23:36:23 compute-0 sudo[157122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:36:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:23.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:36:23 compute-0 python3.9[157124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769038582.3783529-1058-220400980815482/.source.json _original_basename=.d5buh69a follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:36:23 compute-0 sudo[157122]: pam_unix(sudo:session): session closed for user root
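[annotation] The ovn_metadata_agent.json staged above is a Kolla container config; its body is withheld from the log (content=NOT_LOGGING_PARAMETER). Its likely shape can be inferred from the kolla_set_configs output further down (the 01-rootwrap.conf copy and the /var/lib/neutron permission fixes). A hedged sketch of that shape, written as a Python dict; the field names follow the documented Kolla config.json schema, and every concrete value is illustrative only.

    # Illustrative reconstruction of the staged Kolla config; values are guesses
    # consistent with the kolla_set_configs log lines later in this capture.
    import json

    ovn_metadata_agent_config = {
        # Command that kolla_start writes to /run_command and finally exec's.
        "command": "neutron-ovn-metadata-agent",
        "config_files": [
            {
                # Matches "Copying /etc/neutron.conf.d/01-rootwrap.conf to
                # /etc/neutron/rootwrap.conf" seen below.
                "source": "/etc/neutron.conf.d/01-rootwrap.conf",
                "dest": "/etc/neutron/rootwrap.conf",
                "owner": "neutron",
                "perm": "0600",
            },
        ],
        "permissions": [
            # Matches the "Setting permission for /var/lib/neutron..." lines.
            {"path": "/var/lib/neutron", "owner": "neutron:neutron", "recurse": True},
        ],
    }

    print(json.dumps(ovn_metadata_agent_config, indent=2))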
Jan 21 23:36:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:24 compute-0 python3.9[157274]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:36:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:36:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:24.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:36:25 compute-0 ceph-mon[74318]: pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:25.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:26.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:26 compute-0 sudo[157696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdvalbnqryzancoycynimxyetwvoonqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038586.5096593-1178-50158346808147/AnsiballZ_container_config_data.py'
Jan 21 23:36:26 compute-0 sudo[157696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:27 compute-0 ceph-mon[74318]: pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:27 compute-0 python3.9[157698]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 21 23:36:27 compute-0 sudo[157696]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:36:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:27.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:28 compute-0 sudo[157849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auwuovbkvrvzzmumeqaevpvyepjcultp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038587.6334934-1211-136203527014302/AnsiballZ_container_config_hash.py'
Jan 21 23:36:28 compute-0 sudo[157849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:28 compute-0 python3.9[157851]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 21 23:36:28 compute-0 sudo[157849]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:28.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:29 compute-0 ceph-mon[74318]: pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:29 compute-0 sudo[158020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjqldtliwjjsuuviaknsjvzsgzikekex ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769038588.8972-1241-197499105543359/AnsiballZ_edpm_container_manage.py'
Jan 21 23:36:29 compute-0 sudo[158020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:29 compute-0 podman[157975]: 2026-01-21 23:36:29.480841797 +0000 UTC m=+0.141477763 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 23:36:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:29.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:29 compute-0 python3[158025]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 21 23:36:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:30.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:31 compute-0 ceph-mon[74318]: pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:36:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:31.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:36:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:36:32 compute-0 ceph-mon[74318]: pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:32.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:33.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:34.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:35 compute-0 ceph-mon[74318]: pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:36:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:35.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:36:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:36.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:36 compute-0 ceph-mon[74318]: pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:36:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:36:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:37.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:36:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:38 compute-0 sudo[158131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:36:38 compute-0 sudo[158131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:38 compute-0 sudo[158131]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:38 compute-0 sudo[158156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:36:38 compute-0 sudo[158156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:38 compute-0 sudo[158156]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:38.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:38 compute-0 ceph-mon[74318]: pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:36:39
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['images', '.rgw.root', 'default.rgw.control', '.mgr', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups']
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:36:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:39.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:39 compute-0 podman[158045]: 2026-01-21 23:36:39.741129972 +0000 UTC m=+9.920271574 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 21 23:36:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:39 compute-0 podman[158224]: 2026-01-21 23:36:39.97987488 +0000 UTC m=+0.073281053 container create b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 21 23:36:39 compute-0 podman[158224]: 2026-01-21 23:36:39.938162358 +0000 UTC m=+0.031568581 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 21 23:36:39 compute-0 python3[158025]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
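[annotation] The PODMAN-CONTAINER-DEBUG line above shows the full "podman create" invocation that ansible-edpm_container_manage derives from its config_data dict. As an illustration of that mapping only (this is not the module's actual source; the helper name is hypothetical), a small Python translator covering the flags visible in the logged command:

    # Minimal sketch: turn a config_data-style dict into podman-create arguments.
    # Mirrors the flags seen in the logged command; illustrative, not exhaustive.
    def podman_create_args(name: str, cfg: dict) -> list[str]:
        args = ["podman", "create", "--name", name]
        for key, value in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if "healthcheck" in cfg:
            args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("pid"):
            args += ["--pid", cfg["pid"]]
        if cfg.get("privileged"):
            args += ["--privileged=True"]
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        return args + [cfg["image"]]

    # Example with a pared-down version of the config_data logged above:
    print(" ".join(podman_create_args("ovn_metadata_agent", {
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "healthcheck": {"test": "/openstack/healthcheck"},
        "net": "host", "pid": "host", "privileged": True, "user": "root",
        "volumes": ["/run/openvswitch:/run/openvswitch:z"],
        "image": "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified",
    })))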
Jan 21 23:36:40 compute-0 sudo[158020]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:40.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:41 compute-0 ceph-mon[74318]: pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:41.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:41 compute-0 sudo[158412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lthcziowiczihutcovhazqjhkgouzyax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038601.4002523-1265-210407916840666/AnsiballZ_stat.py'
Jan 21 23:36:41 compute-0 sudo[158412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:41 compute-0 python3.9[158414]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:36:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:41 compute-0 sudo[158412]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:36:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:36:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:42.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:36:42 compute-0 sudo[158566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlsknkvkggzfbhnzkramcqcegbxsidfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038602.2835495-1292-135480574225950/AnsiballZ_file.py'
Jan 21 23:36:42 compute-0 sudo[158566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:42 compute-0 python3.9[158568]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:36:42 compute-0 sudo[158566]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:43 compute-0 ceph-mon[74318]: pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:43 compute-0 sudo[158642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xheiugyvzqeifusfynusgoyfudncbzpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038602.2835495-1292-135480574225950/AnsiballZ_stat.py'
Jan 21 23:36:43 compute-0 sudo[158642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:43 compute-0 python3.9[158644]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:36:43 compute-0 sudo[158642]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:43.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:43 compute-0 sudo[158794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpnpmlvpayixjjtheiscfhoejoswkqrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038603.41863-1292-75176765504294/AnsiballZ_copy.py'
Jan 21 23:36:43 compute-0 sudo[158794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:44 compute-0 python3.9[158796]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769038603.41863-1292-75176765504294/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:36:44 compute-0 sudo[158794]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:44 compute-0 sudo[158870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sevubumbugrtdmgalrclhifnlgvirhex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038603.41863-1292-75176765504294/AnsiballZ_systemd.py'
Jan 21 23:36:44 compute-0 sudo[158870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:36:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:44.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:36:44 compute-0 python3.9[158872]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 23:36:44 compute-0 systemd[1]: Reloading.
Jan 21 23:36:44 compute-0 systemd-sysv-generator[158897]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:36:44 compute-0 systemd-rc-local-generator[158894]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:36:45 compute-0 sudo[158870]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:45 compute-0 sudo[158986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbqdsonmdzmdwkkcyvmptgltrznydjhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038603.41863-1292-75176765504294/AnsiballZ_systemd.py'
Jan 21 23:36:45 compute-0 sudo[158986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:45.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:45 compute-0 ceph-mon[74318]: pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:45 compute-0 python3.9[158988]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:36:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:46 compute-0 systemd[1]: Reloading.
Jan 21 23:36:46 compute-0 systemd-sysv-generator[159020]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:36:46 compute-0 systemd-rc-local-generator[159015]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:36:46 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 21 23:36:46 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3593d5c3986d0d6b1b951734754cfac27e384a1b321839947a7d091af357b1/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 21 23:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3593d5c3986d0d6b1b951734754cfac27e384a1b321839947a7d091af357b1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 21 23:36:46 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb.
Jan 21 23:36:46 compute-0 podman[159029]: 2026-01-21 23:36:46.534665322 +0000 UTC m=+0.174696351 container init b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: + sudo -E kolla_set_configs
Jan 21 23:36:46 compute-0 podman[159029]: 2026-01-21 23:36:46.570088698 +0000 UTC m=+0.210119697 container start b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 21 23:36:46 compute-0 edpm-start-podman-container[159029]: ovn_metadata_agent
Jan 21 23:36:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:36:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:46.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:36:46 compute-0 podman[159052]: 2026-01-21 23:36:46.641649957 +0000 UTC m=+0.057700018 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: INFO:__main__:Validating config file
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 21 23:36:46 compute-0 ceph-mon[74318]: pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: INFO:__main__:Copying service configuration files
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: INFO:__main__:Writing out command to execute
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: ++ cat /run_command
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: + CMD=neutron-ovn-metadata-agent
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: + ARGS=
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: + sudo kolla_copy_cacerts
Jan 21 23:36:46 compute-0 edpm-start-podman-container[159028]: Creating additional drop-in dependency for "ovn_metadata_agent" (b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb)
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: + [[ ! -n '' ]]
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: + . kolla_extend_start
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: Running command: 'neutron-ovn-metadata-agent'
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: + umask 0022
Jan 21 23:36:46 compute-0 ovn_metadata_agent[159045]: + exec neutron-ovn-metadata-agent
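[annotation] The trace from "+ sudo -E kolla_set_configs" through "+ exec neutron-ovn-metadata-agent" is the standard Kolla container start sequence: load and validate /var/lib/kolla/config_files/config.json, copy the listed config files into place, fix permissions, write the service command to /run_command, then exec it so the agent becomes PID 1 of the container. A condensed Python sketch of that flow, simplified for illustration and not the actual Kolla scripts:

    # Condensed sketch of the start sequence traced above; comments quote the
    # corresponding log lines. Error handling and SELinux details are omitted.
    import json
    import os
    import shutil

    def kolla_start(config_path="/var/lib/kolla/config_files/config.json"):
        with open(config_path) as f:               # "Loading config file at ..."
            cfg = json.load(f)                     # ("Validating config file")
        for item in cfg.get("config_files", []):   # "Copying service configuration files"
            shutil.copy(item["source"], item["dest"])
            os.chmod(item["dest"], int(item.get("perm", "0644"), 8))
        with open("/run_command", "w") as f:       # "Writing out command to execute"
            f.write(cfg["command"])
        cmd = open("/run_command").read().split()  # "++ cat /run_command"
        os.execvp(cmd[0], cmd)                     # "+ exec neutron-ovn-metadata-agent"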
Jan 21 23:36:46 compute-0 systemd[1]: Reloading.
Jan 21 23:36:46 compute-0 systemd-rc-local-generator[159122]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:36:46 compute-0 systemd-sysv-generator[159126]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:36:46 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 21 23:36:46 compute-0 sudo[158986]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:36:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:47.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:47 compute-0 python3.9[159283]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 21 23:36:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:36:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:48.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.692 159050 INFO neutron.common.config [-] Logging enabled!
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.692 159050 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.692 159050 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.693 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.693 159050 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.693 159050 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.693 159050 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.693 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.693 159050 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.693 159050 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.693 159050 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.694 159050 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.694 159050 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.694 159050 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.694 159050 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.694 159050 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.694 159050 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.694 159050 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.694 159050 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.694 159050 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.695 159050 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.695 159050 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.695 159050 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.695 159050 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.695 159050 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.695 159050 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.695 159050 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.695 159050 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.695 159050 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.695 159050 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.696 159050 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.696 159050 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.696 159050 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.696 159050 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.696 159050 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.696 159050 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.696 159050 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.696 159050 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.696 159050 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.697 159050 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.697 159050 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.697 159050 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.697 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.697 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.697 159050 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.697 159050 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.697 159050 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.697 159050 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.697 159050 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.698 159050 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.698 159050 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.698 159050 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.698 159050 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.698 159050 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.698 159050 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.698 159050 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.698 159050 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.698 159050 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.698 159050 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.699 159050 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.699 159050 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.699 159050 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.699 159050 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.699 159050 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.699 159050 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.699 159050 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.699 159050 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.699 159050 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.700 159050 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.700 159050 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.700 159050 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.700 159050 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.700 159050 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.700 159050 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.700 159050 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.700 159050 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.700 159050 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.700 159050 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.701 159050 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.701 159050 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.701 159050 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.701 159050 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.701 159050 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.701 159050 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.701 159050 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.701 159050 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.701 159050 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.702 159050 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.702 159050 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.702 159050 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.702 159050 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.702 159050 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.702 159050 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.702 159050 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.702 159050 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.702 159050 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.703 159050 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.703 159050 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.703 159050 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.703 159050 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.703 159050 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.703 159050 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.703 159050 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.703 159050 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.703 159050 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.703 159050 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.703 159050 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.704 159050 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.704 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.704 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.704 159050 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.704 159050 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.704 159050 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.704 159050 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.704 159050 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.704 159050 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.705 159050 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.705 159050 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.705 159050 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.705 159050 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.705 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.705 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.705 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.705 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.705 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.706 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.706 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.706 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.706 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.706 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.706 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.706 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.706 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.706 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.706 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.707 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.707 159050 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.707 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.707 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.707 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.707 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.707 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.707 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.707 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.708 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.708 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.708 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.708 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.708 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.708 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.708 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.708 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.708 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.708 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.709 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.709 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.709 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.709 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.709 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.709 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.709 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.709 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.709 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.710 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.710 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.710 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.710 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.710 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.710 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.710 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.710 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.710 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.710 159050 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.711 159050 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.711 159050 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.711 159050 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.711 159050 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.711 159050 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.711 159050 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.711 159050 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.711 159050 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.711 159050 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.711 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.712 159050 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.712 159050 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.712 159050 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.712 159050 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.712 159050 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.712 159050 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.712 159050 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.712 159050 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.712 159050 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.712 159050 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.713 159050 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.713 159050 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.713 159050 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.713 159050 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.713 159050 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.713 159050 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.713 159050 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.713 159050 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.713 159050 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.714 159050 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.714 159050 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.714 159050 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.714 159050 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.714 159050 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.714 159050 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.714 159050 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.714 159050 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.714 159050 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.714 159050 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.715 159050 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.715 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.715 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.715 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.715 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.715 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.715 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.715 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.715 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.715 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.716 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.716 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.716 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.716 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.716 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.716 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.716 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.716 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.716 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.717 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.717 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.717 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.717 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.717 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.717 159050 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.717 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.717 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.717 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.717 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.718 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.718 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.718 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.718 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.718 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.718 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.718 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.718 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.718 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.719 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.719 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.719 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.719 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.719 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.719 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.719 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.719 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.719 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.720 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.720 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.720 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.720 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.720 159050 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.720 159050 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.720 159050 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.720 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.720 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.720 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.721 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.721 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.721 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.721 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.721 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.721 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.721 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.721 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.721 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.722 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.722 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.722 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.722 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.722 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.722 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.722 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.722 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.722 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.723 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.723 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.723 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.723 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.723 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.723 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.723 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.723 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.723 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.723 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.724 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.724 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.724 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.724 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.724 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.724 159050 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.724 159050 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
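
[Note] The dump that ends with the asterisk banner above is produced by oslo.config's log_opt_values(), which neutron agents call at startup whenever debug is enabled: every registered option is emitted at DEBUG level, one line per option, and options registered with secret=True (here transport_url) are masked as ****. A minimal sketch of the same mechanism, using hypothetical option names:

    import logging

    from oslo_config import cfg

    CONF = cfg.CONF
    # Hypothetical options for illustration; secret=True values print as '****'.
    CONF.register_opts([
        cfg.IntOpt('dhcp_default_lease_time', default=43200),
        cfg.StrOpt('transport_url', secret=True,
                   default='rabbit://guest:guest@localhost'),
    ], group='ovn')

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('demo')

    CONF([])                                  # parse an (empty) command line
    CONF.log_opt_values(LOG, logging.DEBUG)   # emits the banner-delimited dump
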
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.733 159050 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.733 159050 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.733 159050 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.733 159050 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.749 159050 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
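
[Note] The ovsdbapp.backend.ovs_idl.vlog lines record the agent attaching to the local ovsdb-server at tcp:127.0.0.1:6640 (the ovs.ovsdb_connection value in the dump above), after building schema indexes on Bridge.name, Port.name and Interface.name. The connection pattern, sketched against ovsdbapp's public API (a sketch, not the agent's exact code path):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Fetch the Open_vSwitch schema and build an IDL, as the agent does at startup.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    conn = connection.Connection(idl=idl, timeout=180)  # ovs.ovsdb_connection_timeout
    api = impl_idl.OvsdbIdl(conn)

    # A simple read to prove the session works: list the local bridges.
    print(api.list_br().execute(check_error=True))
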
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.763 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name c2a76040-4536-46ac-93c9-20aa76f22ff4 (UUID: c2a76040-4536-46ac-93c9-20aa76f22ff4) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.789 159050 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.789 159050 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.789 159050 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.789 159050 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.792 159050 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.798 159050 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
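
[Note] The southbound session is the same pattern over ssl:, except the key, certificate and CA from the ovn.ovn_sb_* options must be installed on the ovs Stream class before dialing. A hedged sketch, assuming the python-ovs Stream API:

    from ovs.stream import Stream
    from ovsdbapp.backend.ovs_idl import connection

    # Certificate material taken from the ovn.ovn_sb_* options in the dump above.
    Stream.ssl_set_private_key_file('/etc/pki/tls/private/ovndb.key')
    Stream.ssl_set_certificate_file('/etc/pki/tls/certs/ovndb.crt')
    Stream.ssl_set_ca_cert_file('/etc/pki/tls/certs/ovndbca.crt')

    idl = connection.OvsdbIdl.from_server(
        'ssl:ovsdbserver-sb.openstack.svc:6642', 'OVN_Southbound')
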
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.803 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'c2a76040-4536-46ac-93c9-20aa76f22ff4'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f82572918e0>], external_ids={}, name=c2a76040-4536-46ac-93c9-20aa76f22ff4, nb_cfg_timestamp=1769038537042, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
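
[Note] The "Matched CREATE" line is ovsdbapp's event machinery firing: the agent registers a row event on Chassis_Private, keyed on its own chassis name, and the IDL invokes it when a matching row appears. A minimal sketch of that pattern (the repr in the log confirms the events/table/conditions shape; the handler body here is hypothetical):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class ChassisPrivateCreateEvent(row_event.RowEvent):
        """Fire once when our own Chassis_Private row is created."""

        def __init__(self, chassis_name):
            events = (self.ROW_CREATE,)
            conditions = (('name', '=', chassis_name),)
            super().__init__(events, 'Chassis_Private', conditions)

        def run(self, event, row, old):
            # Hypothetical reaction; the real agent (re)registers itself here.
            print('chassis row created:', row.name)

    # Registration against a running SB IDL (see the connection sketch above):
    # sb_idl.notify_handler.watch_event(
    #     ChassisPrivateCreateEvent('c2a76040-4536-46ac-93c9-20aa76f22ff4'))
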
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.804 159050 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f8257282f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.804 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.805 159050 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.805 159050 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.805 159050 INFO oslo_service.service [-] Starting 1 workers
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.809 159050 DEBUG oslo_service.service [-] Started child 159385 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.813 159050 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmprjf5wv01/privsep.sock']
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.816 159385 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-425183'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.855 159385 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.856 159385 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.857 159385 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.863 159385 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.871 159385 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 21 23:36:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:48.882 159385 INFO eventlet.wsgi.server [-] (159385) wsgi starting up on http:/var/lib/neutron/metadata_proxy
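
[Note] eventlet's wsgi logger prints the bind target directly after "http:", so "http:/var/lib/neutron/metadata_proxy" means the worker (pid 159385) is serving on the unix socket from metadata_proxy_socket, not on a TCP port. A standard-library sketch for poking it from the host; without the headers the per-network haproxy normally injects, an error status is the likely answer, but any response shows the worker is alive:

    import socket

    # Talk HTTP/1.0 over the agent's unix socket (metadata_proxy_socket above).
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect('/var/lib/neutron/metadata_proxy')
    s.sendall(b'GET / HTTP/1.0\r\n\r\n')
    print(s.recv(4096).decode(errors='replace'))
    s.close()
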
Jan 21 23:36:48 compute-0 sudo[159437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myadqvwbhbcbkrpwhzgzypolxgsrfftm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038608.6027586-1427-44780002291060/AnsiballZ_stat.py'
Jan 21 23:36:48 compute-0 sudo[159437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:49 compute-0 python3.9[159439]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:36:49 compute-0 sudo[159437]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:49 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 21 23:36:49 compute-0 ceph-mon[74318]: pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:36:49 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:49.468 159050 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 21 23:36:49 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:49.470 159050 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmprjf5wv01/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 21 23:36:49 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:49.334 159491 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 21 23:36:49 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:49.339 159491 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 21 23:36:49 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:49.342 159491 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 21 23:36:49 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:49.342 159491 INFO oslo.privsep.daemon [-] privsep daemon running as pid 159491
Jan 21 23:36:49 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:49.474 159491 DEBUG oslo.privsep.daemon [-] privsep: reply[5a110bfe-43f6-4f01-9400-62e5a714eaf3]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
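
[Note] The oslo.privsep lines from 23:36:48.813 through 23:36:49.474 are one handshake: the unprivileged agent (pid 159050) execs a root helper through sudo/neutron-rootwrap, the helper (pid 159491) keeps only CAP_SYS_ADMIN, and every later privileged call is a request/reply pair over /tmp/tmprjf5wv01/privsep.sock (the reply[uuid] lines). Declaring such a context looks roughly like this, with a hypothetical entrypoint (the agent's real context is neutron.privileged.namespace_cmd, as the helper command line shows):

    import os

    from oslo_privsep import capabilities, priv_context

    # Hypothetical context for illustration.
    namespace_cmd = priv_context.PrivContext(
        'demo',
        cfg_section='privsep',
        pypath=__name__ + '.namespace_cmd',
        capabilities=[capabilities.CAP_SYS_ADMIN],
    )

    @namespace_cmd.entrypoint
    def whoami():
        # Executes inside the privsep daemon: uid/gid 0, CAP_SYS_ADMIN retained.
        return os.getuid(), os.getgid()
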
Jan 21 23:36:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:49.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:49 compute-0 sudo[159568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvfwsqyvrflmtyfwfjrtavypopzpgmkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038608.6027586-1427-44780002291060/AnsiballZ_copy.py'
Jan 21 23:36:49 compute-0 sudo[159568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:49 compute-0 python3.9[159570]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769038608.6027586-1427-44780002291060/.source.yaml _original_basename=.z9ztc8xp follow=False checksum=cfeef86b14b878de0f294a1373deced41a393eeb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
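
[Note] The surrounding sudo/stat/copy triplet is Ansible's idempotent file write: the stat module hashes the existing /var/lib/edpm-config/deployed_services.yaml with sha1, and copy only rewrites it when that checksum differs from the source's (cfeef86b...). The comparison reduces to this, sketched in Python:

    import hashlib

    def sha1sum(path):
        """Stream a file through sha1, like ansible stat's get_checksum=True."""
        h = hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                h.update(chunk)
        return h.hexdigest()

    expected = 'cfeef86b14b878de0f294a1373deced41a393eeb'  # from the copy call above
    changed = sha1sum('/var/lib/edpm-config/deployed_services.yaml') != expected
    print('would rewrite file:', changed)
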
Jan 21 23:36:49 compute-0 sudo[159568]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 43 KiB/s rd, 0 B/s wr, 72 op/s
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.009 159491 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.009 159491 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.009 159491 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:36:50 compute-0 ceph-mon[74318]: pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 43 KiB/s rd, 0 B/s wr, 72 op/s
Jan 21 23:36:50 compute-0 sshd-session[149665]: Connection closed by 192.168.122.30 port 47520
Jan 21 23:36:50 compute-0 sshd-session[149662]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:36:50 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Jan 21 23:36:50 compute-0 systemd[1]: session-48.scope: Consumed 1min 5.192s CPU time.
Jan 21 23:36:50 compute-0 systemd-logind[786]: Session 48 logged out. Waiting for processes to exit.
Jan 21 23:36:50 compute-0 systemd-logind[786]: Removed session 48.
Jan 21 23:36:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:50.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.671 159491 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec77b7b-b2e9-4134-b8ef-5a3b09bdf838]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.674 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, column=external_ids, values=({'neutron:ovn-metadata-id': '23ada6d4-99c6-5b92-9480-dca9457d5ccb'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.702 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
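
[Note] These two transactions register the agent in the southbound database: DbAddCommand merges a neutron:ovn-metadata-id key into the chassis' external_ids map, and DbSetCommand records the integration bridge. With an ovsdbapp handle bound to the OVN_Southbound schema (sb_api, built like the connection sketched earlier), the equivalent calls would look roughly like:

    # Assumes sb_api is an ovsdbapp API bound to the OVN_Southbound schema.
    chassis = 'c2a76040-4536-46ac-93c9-20aa76f22ff4'

    sb_api.db_add(
        'Chassis_Private', chassis, 'external_ids',
        {'neutron:ovn-metadata-id': '23ada6d4-99c6-5b92-9480-dca9457d5ccb'},
    ).execute(check_error=True)

    sb_api.db_set(
        'Chassis_Private', chassis,
        ('external_ids', {'neutron:ovn-bridge': 'br-int'}),
        if_exists=True,
    ).execute(check_error=True)
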
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.750 159050 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.750 159050 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.751 159050 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.751 159050 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.751 159050 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.751 159050 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.751 159050 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.751 159050 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.751 159050 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.752 159050 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.752 159050 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.752 159050 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.752 159050 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.752 159050 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.752 159050 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.753 159050 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.753 159050 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.753 159050 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.753 159050 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.753 159050 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.753 159050 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.754 159050 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.754 159050 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.754 159050 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.754 159050 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.754 159050 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.755 159050 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.755 159050 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.755 159050 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.755 159050 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.755 159050 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.755 159050 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.755 159050 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.756 159050 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.756 159050 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.756 159050 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.756 159050 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.756 159050 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.757 159050 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.757 159050 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.757 159050 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.757 159050 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.757 159050 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.757 159050 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.758 159050 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.758 159050 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.758 159050 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.758 159050 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.758 159050 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.758 159050 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.758 159050 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.759 159050 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.759 159050 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.759 159050 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.759 159050 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.759 159050 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.759 159050 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.759 159050 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.760 159050 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.760 159050 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.760 159050 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.760 159050 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.760 159050 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.760 159050 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.760 159050 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.761 159050 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.761 159050 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.761 159050 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.761 159050 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.761 159050 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.761 159050 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.761 159050 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.762 159050 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.762 159050 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.762 159050 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.762 159050 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.762 159050 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.762 159050 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.763 159050 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.763 159050 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.763 159050 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.763 159050 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.763 159050 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.763 159050 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.763 159050 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.764 159050 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.764 159050 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.764 159050 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.764 159050 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.764 159050 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.764 159050 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.764 159050 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.764 159050 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.765 159050 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.765 159050 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.765 159050 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.765 159050 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.765 159050 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.765 159050 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.765 159050 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.766 159050 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.766 159050 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.766 159050 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.766 159050 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.766 159050 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.766 159050 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.766 159050 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.767 159050 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.767 159050 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.767 159050 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.767 159050 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.767 159050 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.768 159050 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.768 159050 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.768 159050 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.768 159050 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.768 159050 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.768 159050 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.768 159050 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.769 159050 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.769 159050 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.769 159050 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.769 159050 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.769 159050 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.769 159050 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.770 159050 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.770 159050 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.770 159050 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.770 159050 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.770 159050 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.770 159050 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.771 159050 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.771 159050 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.771 159050 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.771 159050 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.771 159050 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.771 159050 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.772 159050 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.772 159050 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.772 159050 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.772 159050 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.772 159050 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.772 159050 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.772 159050 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.773 159050 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.773 159050 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.773 159050 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.773 159050 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.773 159050 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.773 159050 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.773 159050 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.774 159050 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.774 159050 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.774 159050 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.774 159050 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.774 159050 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.774 159050 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.774 159050 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.774 159050 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.775 159050 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.775 159050 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.775 159050 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.775 159050 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.775 159050 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.775 159050 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.775 159050 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.776 159050 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.776 159050 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.776 159050 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.776 159050 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.776 159050 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.776 159050 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.776 159050 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.777 159050 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.777 159050 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.777 159050 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.777 159050 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.777 159050 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.777 159050 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.777 159050 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.778 159050 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.778 159050 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.778 159050 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.778 159050 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.778 159050 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.778 159050 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.779 159050 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.779 159050 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.779 159050 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.779 159050 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.779 159050 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.779 159050 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.780 159050 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.780 159050 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.780 159050 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.780 159050 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.780 159050 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.780 159050 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.780 159050 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.781 159050 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.781 159050 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.781 159050 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.781 159050 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.781 159050 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.781 159050 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.781 159050 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.781 159050 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.782 159050 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.782 159050 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.782 159050 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.782 159050 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.782 159050 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.782 159050 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.782 159050 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.783 159050 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.783 159050 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.783 159050 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.783 159050 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.783 159050 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.783 159050 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.783 159050 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.783 159050 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.784 159050 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.784 159050 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.784 159050 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.784 159050 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.784 159050 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.784 159050 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.784 159050 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.785 159050 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.785 159050 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.785 159050 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.785 159050 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.785 159050 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.785 159050 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.785 159050 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.785 159050 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.786 159050 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.786 159050 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.786 159050 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.786 159050 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.786 159050 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.786 159050 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.787 159050 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.787 159050 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.787 159050 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.787 159050 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.787 159050 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.787 159050 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.787 159050 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.788 159050 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.788 159050 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.788 159050 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.788 159050 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.788 159050 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.788 159050 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.788 159050 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.789 159050 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.789 159050 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.789 159050 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.789 159050 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.789 159050 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.789 159050 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.789 159050 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.789 159050 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.790 159050 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.790 159050 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.790 159050 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.790 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.790 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.790 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.791 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.791 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.791 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.791 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.791 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.791 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.791 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.792 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.792 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.792 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.792 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.792 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.792 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.793 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.793 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.793 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.793 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.793 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.793 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.793 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.793 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.794 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.794 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.794 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.794 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.794 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.794 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.794 159050 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.795 159050 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.795 159050 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.795 159050 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.795 159050 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:36:50 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:36:50.795 159050 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 21 23:36:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:51.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 69 KiB/s rd, 0 B/s wr, 115 op/s
Jan 21 23:36:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:36:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:36:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:52.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:36:53 compute-0 ceph-mon[74318]: pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 69 KiB/s rd, 0 B/s wr, 115 op/s
Jan 21 23:36:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:53.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 80 KiB/s rd, 0 B/s wr, 134 op/s
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:36:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:36:54 compute-0 sudo[159598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:36:54 compute-0 sudo[159598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:54 compute-0 sudo[159598]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:54 compute-0 sudo[159623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:36:54 compute-0 sudo[159623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:54 compute-0 sudo[159623]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:36:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:54.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:36:54 compute-0 sudo[159648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:36:54 compute-0 sudo[159648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:54 compute-0 sudo[159648]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:54 compute-0 sudo[159673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:36:54 compute-0 sudo[159673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:55 compute-0 ceph-mon[74318]: pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 80 KiB/s rd, 0 B/s wr, 134 op/s
Jan 21 23:36:55 compute-0 sudo[159673]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 21 23:36:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 23:36:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:36:55 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:36:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:36:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:36:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:36:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:36:55 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 02e2c8d8-c97b-4470-8566-1aefd888dd5c does not exist
Jan 21 23:36:55 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 5e743ea2-84a4-4b05-a3a3-f33b78a04cdc does not exist
Jan 21 23:36:55 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev fc2bb726-3e09-48e7-964d-b667f35bc246 does not exist
Jan 21 23:36:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:36:55 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:36:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:36:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:36:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:36:55 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:36:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:55.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:55 compute-0 sudo[159729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:36:55 compute-0 sudo[159729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:55 compute-0 sudo[159729]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:55 compute-0 sudo[159754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:36:55 compute-0 sudo[159754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:55 compute-0 sudo[159754]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:55 compute-0 sudo[159779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:36:55 compute-0 sudo[159779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:55 compute-0 sudo[159779]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:55 compute-0 sudo[159804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:36:55 compute-0 sudo[159804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 144 op/s
Jan 21 23:36:56 compute-0 podman[159869]: 2026-01-21 23:36:56.186129523 +0000 UTC m=+0.062358517 container create 32da3c59ae14c4060d55aedf4ef2c228975c6c7aa3b9d551796277f97858f96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_taussig, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:36:56 compute-0 sshd-session[159868]: Accepted publickey for zuul from 192.168.122.30 port 39256 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:36:56 compute-0 systemd-logind[786]: New session 49 of user zuul.
Jan 21 23:36:56 compute-0 systemd[1]: Started libpod-conmon-32da3c59ae14c4060d55aedf4ef2c228975c6c7aa3b9d551796277f97858f96c.scope.
Jan 21 23:36:56 compute-0 systemd[1]: Started Session 49 of User zuul.
Jan 21 23:36:56 compute-0 sshd-session[159868]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:36:56 compute-0 podman[159869]: 2026-01-21 23:36:56.159149299 +0000 UTC m=+0.035378343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:36:56 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:36:56 compute-0 podman[159869]: 2026-01-21 23:36:56.280456491 +0000 UTC m=+0.156685485 container init 32da3c59ae14c4060d55aedf4ef2c228975c6c7aa3b9d551796277f97858f96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_taussig, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:36:56 compute-0 podman[159869]: 2026-01-21 23:36:56.287964244 +0000 UTC m=+0.164193208 container start 32da3c59ae14c4060d55aedf4ef2c228975c6c7aa3b9d551796277f97858f96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_taussig, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 21 23:36:56 compute-0 podman[159869]: 2026-01-21 23:36:56.293262453 +0000 UTC m=+0.169491427 container attach 32da3c59ae14c4060d55aedf4ef2c228975c6c7aa3b9d551796277f97858f96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 21 23:36:56 compute-0 awesome_taussig[159887]: 167 167
Jan 21 23:36:56 compute-0 podman[159869]: 2026-01-21 23:36:56.298795418 +0000 UTC m=+0.175024382 container died 32da3c59ae14c4060d55aedf4ef2c228975c6c7aa3b9d551796277f97858f96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_taussig, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:36:56 compute-0 systemd[1]: libpod-32da3c59ae14c4060d55aedf4ef2c228975c6c7aa3b9d551796277f97858f96c.scope: Deactivated successfully.
Jan 21 23:36:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 23:36:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:36:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:36:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:36:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:36:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:36:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:36:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-591ec6f33a46ea050b7c108366f141687c89ff390779168b5015d970210d448f-merged.mount: Deactivated successfully.
Jan 21 23:36:56 compute-0 podman[159869]: 2026-01-21 23:36:56.344406225 +0000 UTC m=+0.220635179 container remove 32da3c59ae14c4060d55aedf4ef2c228975c6c7aa3b9d551796277f97858f96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_taussig, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 21 23:36:56 compute-0 systemd[1]: libpod-conmon-32da3c59ae14c4060d55aedf4ef2c228975c6c7aa3b9d551796277f97858f96c.scope: Deactivated successfully.
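
Between its first and last events above, the short-lived helper container awesome_taussig passes through podman's full lifecycle in well under a second: image pull, container create, init, start, attach, died, remove, with systemd deactivating the matching libpod scope along the way. A sketch that reconstructs such timelines per ID from these journal lines (regex fitted to this log's podman event format, not a podman API):

    import re
    from collections import defaultdict

    # e.g. podman[159869]: 2026-01-21 23:36:56.186129523 +0000 UTC m=+0.062358517
    #      container create 32da3c59ae14... (image=..., name=awesome_taussig, ...)
    EVENT_RE = re.compile(
        r'podman\[\d+\]: (?P<ts>\S+ \S+ \S+ UTC) m=\S+ '
        r'(?:container|image) (?P<event>\w+) (?P<ident>\w+)'
    )

    def lifecycles(lines):
        # Group podman events by container/image ID, keeping log order.
        events = defaultdict(list)
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                events[m.group('ident')].append((m.group('ts'), m.group('event')))
        return events

For container 32da3c59ae14... this yields create -> init -> start -> attach -> died -> remove; the pull event is keyed by the image ID instead.
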
Jan 21 23:36:56 compute-0 podman[159964]: 2026-01-21 23:36:56.576037011 +0000 UTC m=+0.057506673 container create 23d1d54a2c14fea6dbb6fa0cdd0ce1bc1e2445f46f0fcc5dfaddd218f4d6da3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_feistel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 23:36:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:56.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:56 compute-0 systemd[1]: Started libpod-conmon-23d1d54a2c14fea6dbb6fa0cdd0ce1bc1e2445f46f0fcc5dfaddd218f4d6da3f.scope.
Jan 21 23:36:56 compute-0 podman[159964]: 2026-01-21 23:36:56.555689655 +0000 UTC m=+0.037159357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:36:56 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:36:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf48bb7e76b43e7024d1c192bdbb03fe681aa1426e5e691e4a086e6edce6e4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:36:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf48bb7e76b43e7024d1c192bdbb03fe681aa1426e5e691e4a086e6edce6e4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:36:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf48bb7e76b43e7024d1c192bdbb03fe681aa1426e5e691e4a086e6edce6e4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:36:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf48bb7e76b43e7024d1c192bdbb03fe681aa1426e5e691e4a086e6edce6e4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:36:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf48bb7e76b43e7024d1c192bdbb03fe681aa1426e5e691e4a086e6edce6e4f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:36:56 compute-0 podman[159964]: 2026-01-21 23:36:56.669515093 +0000 UTC m=+0.150984795 container init 23d1d54a2c14fea6dbb6fa0cdd0ce1bc1e2445f46f0fcc5dfaddd218f4d6da3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 23:36:56 compute-0 podman[159964]: 2026-01-21 23:36:56.684650044 +0000 UTC m=+0.166119706 container start 23d1d54a2c14fea6dbb6fa0cdd0ce1bc1e2445f46f0fcc5dfaddd218f4d6da3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_feistel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:36:56 compute-0 podman[159964]: 2026-01-21 23:36:56.688003044 +0000 UTC m=+0.169472726 container attach 23d1d54a2c14fea6dbb6fa0cdd0ce1bc1e2445f46f0fcc5dfaddd218f4d6da3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_feistel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:36:57 compute-0 ceph-mon[74318]: pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 144 op/s
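
The pgmap lines that ceph-mgr logs (and ceph-mon re-logs) every couple of seconds summarise cluster state in one fixed-shape record: map version, PG count and states, logical data, raw used/available, and current client throughput. A parsing sketch matching the exact layout in this capture:

    import re

    PGMAP_RE = re.compile(
        r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); '
        r'(?P<data>\d+ \S+) data, (?P<used>\d+ \S+) used, '
        r'(?P<avail>\d+ \S+) / (?P<total>\d+ \S+) avail'
    )

    sample = ('pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, '
              '153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, '
              '144 op/s')
    m = PGMAP_RE.search(sample)
    print(m.group('ver'), m.group('states'), m.group('avail'))
    # 531 305 active+clean 21 GiB

All 305 PGs stay active+clean through this whole window, so the OSD probing below runs against a healthy cluster.
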
Jan 21 23:36:57 compute-0 python3.9[160082]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:36:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:36:57 compute-0 silly_feistel[159980]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:36:57 compute-0 silly_feistel[159980]: --> relative data size: 1.0
Jan 21 23:36:57 compute-0 silly_feistel[159980]: --> All data devices are unavailable
Jan 21 23:36:57 compute-0 systemd[1]: libpod-23d1d54a2c14fea6dbb6fa0cdd0ce1bc1e2445f46f0fcc5dfaddd218f4d6da3f.scope: Deactivated successfully.
Jan 21 23:36:57 compute-0 podman[159964]: 2026-01-21 23:36:57.55892415 +0000 UTC m=+1.040393822 container died 23d1d54a2c14fea6dbb6fa0cdd0ce1bc1e2445f46f0fcc5dfaddd218f4d6da3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_feistel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 21 23:36:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:57.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bf48bb7e76b43e7024d1c192bdbb03fe681aa1426e5e691e4a086e6edce6e4f-merged.mount: Deactivated successfully.
Jan 21 23:36:57 compute-0 podman[159964]: 2026-01-21 23:36:57.630211333 +0000 UTC m=+1.111681005 container remove 23d1d54a2c14fea6dbb6fa0cdd0ce1bc1e2445f46f0fcc5dfaddd218f4d6da3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 23:36:57 compute-0 systemd[1]: libpod-conmon-23d1d54a2c14fea6dbb6fa0cdd0ce1bc1e2445f46f0fcc5dfaddd218f4d6da3f.scope: Deactivated successfully.
Jan 21 23:36:57 compute-0 sudo[159804]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:57 compute-0 sudo[160112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:36:57 compute-0 sudo[160112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:57 compute-0 sudo[160112]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:57 compute-0 sudo[160161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:36:57 compute-0 sudo[160161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:57 compute-0 sudo[160161]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:57 compute-0 sudo[160186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:36:57 compute-0 sudo[160186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:57 compute-0 sudo[160186]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:57 compute-0 sudo[160211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:36:57 compute-0 sudo[160211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 144 op/s
Jan 21 23:36:58 compute-0 podman[160350]: 2026-01-21 23:36:58.363691848 +0000 UTC m=+0.043300180 container create f45c98606b99feef796ed593b34733ce46448e5e467905d60e1674870f728105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_antonelli, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 21 23:36:58 compute-0 ceph-mon[74318]: pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 144 op/s
Jan 21 23:36:58 compute-0 systemd[1]: Started libpod-conmon-f45c98606b99feef796ed593b34733ce46448e5e467905d60e1674870f728105.scope.
Jan 21 23:36:58 compute-0 podman[160350]: 2026-01-21 23:36:58.341055144 +0000 UTC m=+0.020663476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:36:58 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:36:58 compute-0 podman[160350]: 2026-01-21 23:36:58.504114839 +0000 UTC m=+0.183723241 container init f45c98606b99feef796ed593b34733ce46448e5e467905d60e1674870f728105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:36:58 compute-0 podman[160350]: 2026-01-21 23:36:58.515767176 +0000 UTC m=+0.195375528 container start f45c98606b99feef796ed593b34733ce46448e5e467905d60e1674870f728105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_antonelli, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:36:58 compute-0 podman[160350]: 2026-01-21 23:36:58.520056953 +0000 UTC m=+0.199665315 container attach f45c98606b99feef796ed593b34733ce46448e5e467905d60e1674870f728105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:36:58 compute-0 quirky_antonelli[160392]: 167 167
Jan 21 23:36:58 compute-0 systemd[1]: libpod-f45c98606b99feef796ed593b34733ce46448e5e467905d60e1674870f728105.scope: Deactivated successfully.
Jan 21 23:36:58 compute-0 podman[160350]: 2026-01-21 23:36:58.522061143 +0000 UTC m=+0.201669455 container died f45c98606b99feef796ed593b34733ce46448e5e467905d60e1674870f728105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_antonelli, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:36:58 compute-0 sudo[160395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:36:58 compute-0 sudo[160395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:58 compute-0 sudo[160395]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:58 compute-0 sudo[160445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbrprqpxujkbetpcpzvowslgqjohzynb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038617.9770021-62-29167780951959/AnsiballZ_command.py'
Jan 21 23:36:58 compute-0 sudo[160445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aca99ad0fc45905b5bdfad803d51ac5f3de95023683f0315e21dffc3da2fafe-merged.mount: Deactivated successfully.
Jan 21 23:36:58 compute-0 podman[160350]: 2026-01-21 23:36:58.565678902 +0000 UTC m=+0.245287224 container remove f45c98606b99feef796ed593b34733ce46448e5e467905d60e1674870f728105 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 21 23:36:58 compute-0 systemd[1]: libpod-conmon-f45c98606b99feef796ed593b34733ce46448e5e467905d60e1674870f728105.scope: Deactivated successfully.
Jan 21 23:36:58 compute-0 sudo[160458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:36:58 compute-0 sudo[160458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:58 compute-0 sudo[160458]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:36:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:36:58.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:36:58 compute-0 podman[160495]: 2026-01-21 23:36:58.744258358 +0000 UTC m=+0.050144924 container create fb276f05bea391149a52a9e4282c32c842c35f0f738ade45958c0c98184b7fba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dijkstra, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:36:58 compute-0 python3.9[160459]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
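
The Ansible task above checks for the nova_virtlogd container by shelling out to podman ps -a with an anchored name filter and a Go-template format string. The same check from Python via subprocess (a sketch of what the command effectively does, not the module's own code):

    import subprocess

    def container_exists(name):
        # Mirrors: podman ps -a --filter name=^NAME$ --format {{.Names}}
        out = subprocess.run(
            ['podman', 'ps', '-a',
             '--filter', f'name=^{name}$',
             '--format', '{{.Names}}'],
            capture_output=True, text=True, check=True,
        ).stdout
        return name in out.split()

    print(container_exists('nova_virtlogd'))

The ^...$ anchors matter: podman name filters match anywhere in the name, so an unanchored nova_virtlogd would also match a hypothetical nova_virtlogd_wrapper.
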
Jan 21 23:36:58 compute-0 systemd[1]: Started libpod-conmon-fb276f05bea391149a52a9e4282c32c842c35f0f738ade45958c0c98184b7fba.scope.
Jan 21 23:36:58 compute-0 podman[160495]: 2026-01-21 23:36:58.721968374 +0000 UTC m=+0.027854940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:36:58 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:36:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a22c4f9cc552649fcc465f13e565b649b25367f62a1f0d479cd9af8500f99434/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:36:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a22c4f9cc552649fcc465f13e565b649b25367f62a1f0d479cd9af8500f99434/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:36:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a22c4f9cc552649fcc465f13e565b649b25367f62a1f0d479cd9af8500f99434/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:36:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a22c4f9cc552649fcc465f13e565b649b25367f62a1f0d479cd9af8500f99434/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:36:58 compute-0 sudo[160445]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:58 compute-0 podman[160495]: 2026-01-21 23:36:58.85081154 +0000 UTC m=+0.156698116 container init fb276f05bea391149a52a9e4282c32c842c35f0f738ade45958c0c98184b7fba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 23:36:58 compute-0 podman[160495]: 2026-01-21 23:36:58.86996333 +0000 UTC m=+0.175849906 container start fb276f05bea391149a52a9e4282c32c842c35f0f738ade45958c0c98184b7fba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 23:36:58 compute-0 podman[160495]: 2026-01-21 23:36:58.873480175 +0000 UTC m=+0.179366751 container attach fb276f05bea391149a52a9e4282c32c842c35f0f738ade45958c0c98184b7fba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dijkstra, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 21 23:36:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:36:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:36:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:36:59.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]: {
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:     "1": [
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:         {
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:             "devices": [
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:                 "/dev/loop3"
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:             ],
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:             "lv_name": "ceph_lv0",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:             "lv_size": "7511998464",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:             "name": "ceph_lv0",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:             "tags": {
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:                 "ceph.cluster_name": "ceph",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:                 "ceph.crush_device_class": "",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:                 "ceph.encrypted": "0",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:                 "ceph.osd_id": "1",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:                 "ceph.type": "block",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:                 "ceph.vdo": "0"
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:             },
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:             "type": "block",
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:             "vg_name": "ceph_vg0"
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:         }
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]:     ]
Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]: }
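
The intelligent_dijkstra container is the ceph-volume lvm list --format json step launched at 23:36:57; its stdout lands in the journal one line at a time under the container name. Stripping the syslog prefix and rejoining the lines recovers the JSON document, keyed by OSD ID. A sketch (prefix pattern fitted to these lines):

    import json
    import re

    # Strips e.g. "Jan 21 23:36:59 compute-0 intelligent_dijkstra[160522]: "
    PREFIX_RE = re.compile(r'^\w+ +\d+ [\d:]+ \S+ \S+\[\d+\]: ')

    def join_json_payload(lines):
        # Rebuild a JSON document that a container printed line-by-line
        # into the journal.
        body = '\n'.join(PREFIX_RE.sub('', ln.rstrip('\n')) for ln in lines)
        return json.loads(body)

    # With the lines captured above:
    #   lvm = join_json_payload(captured_lines)
    #   lvm['1'][0]['lv_path']               -> '/dev/ceph_vg0/ceph_lv0'
    #   lvm['1'][0]['tags']['ceph.osd_fsid'] -> '4f45f4f4-edfc-474c-93fc-45d596171ed8'

The ceph.osd_id=1 tag on the LV also explains the earlier silly_feistel output: the lvm batch probe found its only candidate device already consumed, hence "All data devices are unavailable".
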
Jan 21 23:36:59 compute-0 systemd[1]: libpod-fb276f05bea391149a52a9e4282c32c842c35f0f738ade45958c0c98184b7fba.scope: Deactivated successfully.
Jan 21 23:36:59 compute-0 podman[160495]: 2026-01-21 23:36:59.707291667 +0000 UTC m=+1.013178263 container died fb276f05bea391149a52a9e4282c32c842c35f0f738ade45958c0c98184b7fba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dijkstra, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Jan 21 23:36:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-a22c4f9cc552649fcc465f13e565b649b25367f62a1f0d479cd9af8500f99434-merged.mount: Deactivated successfully.
Jan 21 23:36:59 compute-0 podman[160495]: 2026-01-21 23:36:59.783069963 +0000 UTC m=+1.088956539 container remove fb276f05bea391149a52a9e4282c32c842c35f0f738ade45958c0c98184b7fba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dijkstra, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 21 23:36:59 compute-0 systemd[1]: libpod-conmon-fb276f05bea391149a52a9e4282c32c842c35f0f738ade45958c0c98184b7fba.scope: Deactivated successfully.
Jan 21 23:36:59 compute-0 sudo[160211]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:59 compute-0 podman[160616]: 2026-01-21 23:36:59.871056342 +0000 UTC m=+0.128443785 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2)
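
The long podman line above is a health_status event for the ovn_controller container: the configured probe (/openstack/healthcheck, mounted into the container per the volumes list) ran and reported health_status=healthy with a failing streak of 0. A sketch for pulling the verdict out of such event lines (regex fitted to this log's field order, which puts name= before health_status=):

    import re

    HEALTH_RE = re.compile(
        r'container health_status \w+ \(.*?name=(?P<name>[^,]+),'
        r'.*?health_status=(?P<status>[^,)]+)'
    )

    def health_verdict(line):
        m = HEALTH_RE.search(line)
        return (m.group('name'), m.group('status')) if m else None

    # On the ovn_controller line above: ('ovn_controller', 'healthy')

The same probe can be triggered by hand with podman healthcheck run ovn_controller, which exits nonzero when the check fails.
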
Jan 21 23:36:59 compute-0 sudo[160686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:36:59 compute-0 sudo[160686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:59 compute-0 sudo[160686]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:59 compute-0 sudo[160743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozmxcwqfppzdtngzxhuqpzivwvwpczav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038619.2503524-95-188446841819235/AnsiballZ_systemd_service.py'
Jan 21 23:36:59 compute-0 sudo[160743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:36:59 compute-0 sudo[160745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:36:59 compute-0 sudo[160745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:36:59 compute-0 sudo[160745]: pam_unix(sudo:session): session closed for user root
Jan 21 23:36:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 144 op/s
Jan 21 23:36:59 compute-0 sudo[160772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:37:00 compute-0 sudo[160772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:37:00 compute-0 sudo[160772]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:00 compute-0 sudo[160797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:37:00 compute-0 sudo[160797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:37:00 compute-0 python3.9[160750]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 23:37:00 compute-0 systemd[1]: Reloading.
Jan 21 23:37:00 compute-0 systemd-rc-local-generator[160865]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:37:00 compute-0 systemd-sysv-generator[160870]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:37:00 compute-0 podman[160899]: 2026-01-21 23:37:00.489187034 +0000 UTC m=+0.066879072 container create 552dad308a37bde44ee62081d7ad05df13e59ab7c4a64f6b3acee9292ead410c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 21 23:37:00 compute-0 systemd[1]: Started libpod-conmon-552dad308a37bde44ee62081d7ad05df13e59ab7c4a64f6b3acee9292ead410c.scope.
Jan 21 23:37:00 compute-0 sudo[160743]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:00 compute-0 podman[160899]: 2026-01-21 23:37:00.454835711 +0000 UTC m=+0.032527829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:37:00 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:37:00 compute-0 podman[160899]: 2026-01-21 23:37:00.580861763 +0000 UTC m=+0.158553811 container init 552dad308a37bde44ee62081d7ad05df13e59ab7c4a64f6b3acee9292ead410c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:37:00 compute-0 podman[160899]: 2026-01-21 23:37:00.594309483 +0000 UTC m=+0.172001541 container start 552dad308a37bde44ee62081d7ad05df13e59ab7c4a64f6b3acee9292ead410c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:37:00 compute-0 podman[160899]: 2026-01-21 23:37:00.598535149 +0000 UTC m=+0.176227237 container attach 552dad308a37bde44ee62081d7ad05df13e59ab7c4a64f6b3acee9292ead410c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:37:00 compute-0 charming_napier[160915]: 167 167
Jan 21 23:37:00 compute-0 systemd[1]: libpod-552dad308a37bde44ee62081d7ad05df13e59ab7c4a64f6b3acee9292ead410c.scope: Deactivated successfully.
Jan 21 23:37:00 compute-0 podman[160899]: 2026-01-21 23:37:00.605715342 +0000 UTC m=+0.183407410 container died 552dad308a37bde44ee62081d7ad05df13e59ab7c4a64f6b3acee9292ead410c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_napier, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Jan 21 23:37:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:37:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:00.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:37:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5e14b532f3c963922461966c6a50376991f15e10169007e0570bed3d276c36c-merged.mount: Deactivated successfully.
Jan 21 23:37:00 compute-0 podman[160899]: 2026-01-21 23:37:00.650671771 +0000 UTC m=+0.228363809 container remove 552dad308a37bde44ee62081d7ad05df13e59ab7c4a64f6b3acee9292ead410c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_napier, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 21 23:37:00 compute-0 systemd[1]: libpod-conmon-552dad308a37bde44ee62081d7ad05df13e59ab7c4a64f6b3acee9292ead410c.scope: Deactivated successfully.
Jan 21 23:37:00 compute-0 podman[160986]: 2026-01-21 23:37:00.873516625 +0000 UTC m=+0.075427447 container create aeb61bc33e275821b640695e4f1510b80366363f4dac91864371b62de92601ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 21 23:37:00 compute-0 systemd[1]: Started libpod-conmon-aeb61bc33e275821b640695e4f1510b80366363f4dac91864371b62de92601ae.scope.
Jan 21 23:37:00 compute-0 podman[160986]: 2026-01-21 23:37:00.84512729 +0000 UTC m=+0.047038132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:37:00 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3208490d5973c903dc4c9b361d2bf370c4f5ccd139c0b12bb2df8b6491f388e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3208490d5973c903dc4c9b361d2bf370c4f5ccd139c0b12bb2df8b6491f388e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3208490d5973c903dc4c9b361d2bf370c4f5ccd139c0b12bb2df8b6491f388e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3208490d5973c903dc4c9b361d2bf370c4f5ccd139c0b12bb2df8b6491f388e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:37:00 compute-0 podman[160986]: 2026-01-21 23:37:00.978200071 +0000 UTC m=+0.180110943 container init aeb61bc33e275821b640695e4f1510b80366363f4dac91864371b62de92601ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_swartz, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 23:37:00 compute-0 podman[160986]: 2026-01-21 23:37:00.992006902 +0000 UTC m=+0.193917694 container start aeb61bc33e275821b640695e4f1510b80366363f4dac91864371b62de92601ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 21 23:37:00 compute-0 podman[160986]: 2026-01-21 23:37:00.995988531 +0000 UTC m=+0.197899353 container attach aeb61bc33e275821b640695e4f1510b80366363f4dac91864371b62de92601ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_swartz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 21 23:37:01 compute-0 ceph-mon[74318]: pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 144 op/s
Jan 21 23:37:01 compute-0 python3.9[161109]: ansible-ansible.builtin.service_facts Invoked
Jan 21 23:37:01 compute-0 network[161127]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 23:37:01 compute-0 network[161128]: 'network-scripts' will be removed from distribution in near future.
Jan 21 23:37:01 compute-0 network[161129]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 23:37:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:01.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Jan 21 23:37:02 compute-0 sleepy_swartz[161031]: {
Jan 21 23:37:02 compute-0 sleepy_swartz[161031]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:37:02 compute-0 sleepy_swartz[161031]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:37:02 compute-0 sleepy_swartz[161031]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:37:02 compute-0 sleepy_swartz[161031]:         "osd_id": 1,
Jan 21 23:37:02 compute-0 sleepy_swartz[161031]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:37:02 compute-0 sleepy_swartz[161031]:         "type": "bluestore"
Jan 21 23:37:02 compute-0 sleepy_swartz[161031]:     }
Jan 21 23:37:02 compute-0 sleepy_swartz[161031]: }
Jan 21 23:37:02 compute-0 podman[160986]: 2026-01-21 23:37:02.070122877 +0000 UTC m=+1.272033659 container died aeb61bc33e275821b640695e4f1510b80366363f4dac91864371b62de92601ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 23:37:02 compute-0 systemd[1]: libpod-aeb61bc33e275821b640695e4f1510b80366363f4dac91864371b62de92601ae.scope: Deactivated successfully.
Jan 21 23:37:02 compute-0 systemd[1]: libpod-aeb61bc33e275821b640695e4f1510b80366363f4dac91864371b62de92601ae.scope: Consumed 1.083s CPU time.
Jan 21 23:37:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3208490d5973c903dc4c9b361d2bf370c4f5ccd139c0b12bb2df8b6491f388e-merged.mount: Deactivated successfully.
Jan 21 23:37:02 compute-0 podman[160986]: 2026-01-21 23:37:02.320828551 +0000 UTC m=+1.522739333 container remove aeb61bc33e275821b640695e4f1510b80366363f4dac91864371b62de92601ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:37:02 compute-0 systemd[1]: libpod-conmon-aeb61bc33e275821b640695e4f1510b80366363f4dac91864371b62de92601ae.scope: Deactivated successfully.
Jan 21 23:37:02 compute-0 sudo[160797]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:37:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:37:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:37:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:37:02 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev f177163f-bb65-4376-9717-5379107f8458 does not exist
Jan 21 23:37:02 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 993a8681-7b5f-4bae-b0b4-38c130bb4f81 does not exist
Jan 21 23:37:02 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev cabec6e9-a246-4770-b164-0fb6e5a2db19 does not exist
Jan 21 23:37:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:37:02 compute-0 sudo[161169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:37:02 compute-0 sudo[161169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:37:02 compute-0 sudo[161169]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:02 compute-0 sudo[161197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:37:02 compute-0 sudo[161197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:37:02 compute-0 sudo[161197]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:37:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:02.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:37:03 compute-0 ceph-mon[74318]: pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Jan 21 23:37:03 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:37:03 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:37:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:03.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Jan 21 23:37:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:37:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:04.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:37:05 compute-0 ceph-mon[74318]: pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Jan 21 23:37:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:37:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:05.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:37:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Jan 21 23:37:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:37:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:06.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:37:06 compute-0 sudo[161469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdripnjivokewwwpcktppmokyrhtbunb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038626.4211228-152-42190897415237/AnsiballZ_systemd_service.py'
Jan 21 23:37:06 compute-0 sudo[161469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:07 compute-0 python3.9[161471]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:37:07 compute-0 sudo[161469]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:07 compute-0 ceph-mon[74318]: pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Jan 21 23:37:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:37:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:07.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:07 compute-0 sudo[161623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atyhhrolwmvgvkelnceafxvukhsqvpgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038627.2920105-152-160120958585188/AnsiballZ_systemd_service.py'
Jan 21 23:37:07 compute-0 sudo[161623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:07 compute-0 python3.9[161625]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:37:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:08 compute-0 sudo[161623]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:08 compute-0 sudo[161776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cycxsmnubbfbrzcyjsdkjbgeoxnnfjhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038628.172755-152-205279626584321/AnsiballZ_systemd_service.py'
Jan 21 23:37:08 compute-0 sudo[161776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:08.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:08 compute-0 python3.9[161778]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:37:08 compute-0 sudo[161776]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:09 compute-0 ceph-mon[74318]: pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:37:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:37:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:37:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:37:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:37:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:37:09 compute-0 sudo[161930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bytmsxfnfshteyqzngoqstycdwvwdfid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038629.032306-152-3366367417734/AnsiballZ_systemd_service.py'
Jan 21 23:37:09 compute-0 sudo[161930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:09.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:09 compute-0 python3.9[161932]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:37:09 compute-0 sudo[161930]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:10 compute-0 sudo[162083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iitcxpfnavaldfosgihrotbproebdwia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038629.86262-152-103256180539132/AnsiballZ_systemd_service.py'
Jan 21 23:37:10 compute-0 sudo[162083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:10 compute-0 python3.9[162085]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:37:10 compute-0 sudo[162083]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:10.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:11 compute-0 sudo[162236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyqwfqzqdyrpukogjpanaeioqljljahd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038630.6834164-152-99555425242325/AnsiballZ_systemd_service.py'
Jan 21 23:37:11 compute-0 sudo[162236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:11 compute-0 python3.9[162238]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:37:11 compute-0 sudo[162236]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:11.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:11 compute-0 sudo[162390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arfketgxmfjjkruuqyvrhavmbgyhykvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038631.5630367-152-31126334362983/AnsiballZ_systemd_service.py'
Jan 21 23:37:11 compute-0 sudo[162390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:12 compute-0 ceph-mon[74318]: pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:37:12 compute-0 python3.9[162392]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:37:12 compute-0 sudo[162390]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:37:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:12.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:37:13 compute-0 ceph-mon[74318]: pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:13.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:14 compute-0 sudo[162544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xobemqdojqlaaztkoylzsabdszrtlsxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038633.8664525-308-126532312218114/AnsiballZ_file.py'
Jan 21 23:37:14 compute-0 sudo[162544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:14 compute-0 python3.9[162546]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:37:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:37:14 compute-0 sudo[162544]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:14.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:37:15 compute-0 sudo[162696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdfmqnqyhlwkxrarzzgfnsxdfwirbntc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038634.8316054-308-84950212088259/AnsiballZ_file.py'
Jan 21 23:37:15 compute-0 sudo[162696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:15 compute-0 python3.9[162698]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:37:15 compute-0 sudo[162696]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:15 compute-0 ceph-mon[74318]: pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:15.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:15 compute-0 sudo[162849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atfhyperqtgowaeedvcdhhylixqzwvgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038635.4582758-308-36672580948936/AnsiballZ_file.py'
Jan 21 23:37:15 compute-0 sudo[162849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:16 compute-0 python3.9[162851]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:37:16 compute-0 sudo[162849]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:16 compute-0 ceph-mon[74318]: pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:37:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:16.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:37:16 compute-0 sudo[163001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suspujpixwahdlxehdcoyogcdnlumfhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038636.2320082-308-81018510747386/AnsiballZ_file.py'
Jan 21 23:37:16 compute-0 sudo[163001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:16 compute-0 python3.9[163003]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:37:16 compute-0 sudo[163001]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:16 compute-0 podman[163004]: 2026-01-21 23:37:16.985825333 +0000 UTC m=+0.097844695 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 21 23:37:17 compute-0 sudo[163173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkanrqmytuthmiwjgmnsimrnwrkjzjej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038637.0112264-308-19794729944338/AnsiballZ_file.py'
Jan 21 23:37:17 compute-0 sudo[163173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:37:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:17.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:17 compute-0 python3.9[163175]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:37:17 compute-0 sudo[163173]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:18 compute-0 sudo[163325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvfflypnhyuuieadrsefkserrlquywgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038637.8123002-308-209006593160781/AnsiballZ_file.py'
Jan 21 23:37:18 compute-0 sudo[163325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:18 compute-0 python3.9[163327]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:37:18 compute-0 sudo[163325]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:37:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:18.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:37:18 compute-0 sudo[163404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:37:18 compute-0 sudo[163404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:37:18 compute-0 sudo[163404]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:18 compute-0 sudo[163447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:37:18 compute-0 sudo[163447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:37:18 compute-0 sudo[163447]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:18 compute-0 sudo[163527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owwuhvytqxoagbfvqcxkidyiqbjcxbyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038638.546266-308-36270316318052/AnsiballZ_file.py'
Jan 21 23:37:18 compute-0 sudo[163527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:19 compute-0 ceph-mon[74318]: pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:19 compute-0 python3.9[163529]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:37:19 compute-0 sudo[163527]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:19.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:19 compute-0 sudo[163680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzlpfnbvxiblziibpvqjeunkwhlzumju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038639.3416646-458-211819589286785/AnsiballZ_file.py'
Jan 21 23:37:19 compute-0 sudo[163680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:20 compute-0 python3.9[163682]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:37:20 compute-0 sudo[163680]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:37:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:20.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:37:20 compute-0 sudo[163832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hisvgileankdhlzynikmmgssqqhyqbcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038640.3679025-458-55837714601143/AnsiballZ_file.py'
Jan 21 23:37:20 compute-0 sudo[163832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:20 compute-0 python3.9[163834]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:37:21 compute-0 sudo[163832]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:21 compute-0 ceph-mon[74318]: pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:21 compute-0 sudo[163985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmnmptirnvsbucvyqxkylctcwluobqgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038641.1458547-458-219890300490972/AnsiballZ_file.py'
Jan 21 23:37:21 compute-0 sudo[163985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:21.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:21 compute-0 python3.9[163987]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:37:21 compute-0 sudo[163985]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:22 compute-0 sudo[164137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlrpnxsfsxdtfaxxirogsahnsquwewfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038641.8881948-458-155043848092273/AnsiballZ_file.py'
Jan 21 23:37:22 compute-0 sudo[164137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:37:22 compute-0 python3.9[164139]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:37:22 compute-0 sudo[164137]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:22.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:22 compute-0 sudo[164289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcuvpjaabghjrcseiuzbvgipdfmrmzln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038642.660699-458-271976883412504/AnsiballZ_file.py'
Jan 21 23:37:22 compute-0 sudo[164289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:23 compute-0 ceph-mon[74318]: pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:23 compute-0 python3.9[164291]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:37:23 compute-0 sudo[164289]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:23.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:23 compute-0 sudo[164442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuqssqfoafygzyzanlmtezkzsdcrisfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038643.439479-458-229591967550312/AnsiballZ_file.py'
Jan 21 23:37:23 compute-0 sudo[164442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:23 compute-0 python3.9[164444]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:37:23 compute-0 sudo[164442]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:24 compute-0 sudo[164594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpiiojruzegbaxnqztkymzmmcbhaaaxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038644.1280146-458-236538022310031/AnsiballZ_file.py'
Jan 21 23:37:24 compute-0 sudo[164594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:37:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:24.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:37:24 compute-0 python3.9[164596]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:37:24 compute-0 sudo[164594]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:25 compute-0 ceph-mon[74318]: pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:25 compute-0 sudo[164747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjdyjijqpjowmxgflekraqrpktynqaws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038645.0197725-611-37232502810981/AnsiballZ_command.py'
Jan 21 23:37:25 compute-0 sudo[164747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:25 compute-0 python3.9[164749]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:37:25 compute-0 sudo[164747]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:25.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:26 compute-0 python3.9[164901]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 23:37:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:26.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:27 compute-0 ceph-mon[74318]: pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:27 compute-0 sudo[165051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdapqzvabznvrckxahyhxhkrucvqsfio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038646.8966053-665-249427187791367/AnsiballZ_systemd_service.py'
Jan 21 23:37:27 compute-0 sudo[165051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:37:27 compute-0 python3.9[165053]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 23:37:27 compute-0 systemd[1]: Reloading.
Jan 21 23:37:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:27.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:27 compute-0 systemd-rc-local-generator[165080]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:37:27 compute-0 systemd-sysv-generator[165087]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:37:27 compute-0 sudo[165051]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:28.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:29 compute-0 ceph-mon[74318]: pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:29 compute-0 sudo[165241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhxagjqhumrhawdzlkxcgvtzfjuvmkeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038649.0184717-689-157685755108644/AnsiballZ_command.py'
Jan 21 23:37:29 compute-0 sudo[165241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:29 compute-0 python3.9[165243]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:37:29 compute-0 sudo[165241]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:29.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:30 compute-0 sudo[165405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgqkwzunnhljolpuftrnynygmxkohrmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038649.760934-689-131588078403592/AnsiballZ_command.py'
Jan 21 23:37:30 compute-0 sudo[165405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:30 compute-0 podman[165368]: 2026-01-21 23:37:30.150466236 +0000 UTC m=+0.132189685 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 21 23:37:30 compute-0 python3.9[165413]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:37:30 compute-0 sudo[165405]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:30.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:30 compute-0 sudo[165573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwxtdagxtmveiqitpqtrndxwkqzbifag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038650.4738963-689-264121949904741/AnsiballZ_command.py'
Jan 21 23:37:30 compute-0 sudo[165573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:31 compute-0 python3.9[165575]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:37:31 compute-0 ceph-mon[74318]: pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:31 compute-0 sudo[165573]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:31 compute-0 sudo[165727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivrymtqfsbhwpiookxfpyscttcgibmuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038651.296145-689-89290632363080/AnsiballZ_command.py'
Jan 21 23:37:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:31.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:31 compute-0 sudo[165727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:31 compute-0 python3.9[165729]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:37:31 compute-0 sudo[165727]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:32 compute-0 sudo[165880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjfcelauntfdnbkxpqctsfikueznwlen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038652.0073416-689-50950518323714/AnsiballZ_command.py'
Jan 21 23:37:32 compute-0 sudo[165880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:37:32 compute-0 python3.9[165882]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:37:32 compute-0 sudo[165880]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:37:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:32.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:37:33 compute-0 sudo[166033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfybcugiofeepshsugzbjfndopfnvvgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038652.770278-689-16865565534378/AnsiballZ_command.py'
Jan 21 23:37:33 compute-0 sudo[166033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:33 compute-0 ceph-mon[74318]: pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:33 compute-0 python3.9[166035]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:37:33 compute-0 sudo[166033]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:33.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:33 compute-0 sudo[166187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jchenzyluvsnnjjgyhdxbazgydooujnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038653.4836104-689-269002157461965/AnsiballZ_command.py'
Jan 21 23:37:33 compute-0 sudo[166187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:33 compute-0 python3.9[166189]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:37:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:34 compute-0 sudo[166187]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:37:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:34.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:37:35 compute-0 ceph-mon[74318]: pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:35 compute-0 sudo[166340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxfjltlbtocyrzewikhccjjwwjakjibb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038654.7228067-851-273177015970607/AnsiballZ_getent.py'
Jan 21 23:37:35 compute-0 sudo[166340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:35 compute-0 python3.9[166342]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 21 23:37:35 compute-0 sudo[166340]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:35.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:36 compute-0 sudo[166494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiuztexdzumlbtesakqxomwbyclkdfyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038655.6738732-875-176589458921950/AnsiballZ_group.py'
Jan 21 23:37:36 compute-0 sudo[166494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:36 compute-0 python3.9[166496]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 23:37:36 compute-0 groupadd[166497]: group added to /etc/group: name=libvirt, GID=42473
Jan 21 23:37:36 compute-0 groupadd[166497]: group added to /etc/gshadow: name=libvirt
Jan 21 23:37:36 compute-0 groupadd[166497]: new group: name=libvirt, GID=42473
Jan 21 23:37:36 compute-0 sudo[166494]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:36.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:37 compute-0 ceph-mon[74318]: pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:37 compute-0 sudo[166652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyynbdfvwoadfpqmmrdsoypjvlwrtdhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038656.6008904-899-252727970300524/AnsiballZ_user.py'
Jan 21 23:37:37 compute-0 sudo[166652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:37:37 compute-0 python3.9[166654]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 21 23:37:37 compute-0 useradd[166657]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 21 23:37:37 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 23:37:37 compute-0 sudo[166652]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:37.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:38 compute-0 sudo[166814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kifndsqxmavzoailqozahhbzazpxnyda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038657.9834137-932-3803237216186/AnsiballZ_setup.py'
Jan 21 23:37:38 compute-0 sudo[166814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:37:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:38.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:37:38 compute-0 python3.9[166816]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:37:38 compute-0 sudo[166824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:37:38 compute-0 sudo[166824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:37:38 compute-0 sudo[166824]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:38 compute-0 sudo[166849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:37:38 compute-0 sudo[166849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:37:38 compute-0 sudo[166849]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:38 compute-0 sudo[166814]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:39 compute-0 ceph-mon[74318]: pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:37:39
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', '.mgr', 'images', '.rgw.root', 'default.rgw.log']
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:37:39 compute-0 sudo[166949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdkykidkdnljcofomqjsefajdwarzpwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038657.9834137-932-3803237216186/AnsiballZ_dnf.py'
Jan 21 23:37:39 compute-0 sudo[166949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:37:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:39.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:39 compute-0 python3.9[166951]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:37:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:40.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:41 compute-0 ceph-mon[74318]: pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:37:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:41.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:37:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:37:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:42.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:43 compute-0 ceph-mon[74318]: pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:43.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:44.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:45 compute-0 ceph-mon[74318]: pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:45.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:46.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:47 compute-0 ceph-mon[74318]: pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:37:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:37:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:47.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:37:47 compute-0 podman[167033]: 2026-01-21 23:37:47.940699072 +0000 UTC m=+0.056411249 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 23:37:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:48.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:37:48.729 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:37:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:37:48.733 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:37:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:37:48.734 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:37:49 compute-0 ceph-mon[74318]: pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:49.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:50.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:51 compute-0 ceph-mon[74318]: pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:51.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:37:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:52.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:53 compute-0 ceph-mon[74318]: pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:53.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:37:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:37:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:54.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:55 compute-0 ceph-mon[74318]: pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:55.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:56 compute-0 ceph-mon[74318]: pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:56.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:37:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:57.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:37:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:37:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:37:58.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:37:58 compute-0 sudo[167172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:37:58 compute-0 sudo[167172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:37:58 compute-0 sudo[167172]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:59 compute-0 ceph-mon[74318]: pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:37:59 compute-0 sudo[167197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:37:59 compute-0 sudo[167197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:37:59 compute-0 sudo[167197]: pam_unix(sudo:session): session closed for user root
Jan 21 23:37:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:37:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:37:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:37:59.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:00.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:01 compute-0 podman[167223]: 2026-01-21 23:38:01.028382154 +0000 UTC m=+0.124783180 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 21 23:38:01 compute-0 ceph-mon[74318]: pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:01.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:38:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:38:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:02.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:38:02 compute-0 sudo[167250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:38:02 compute-0 sudo[167250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:02 compute-0 sudo[167250]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:03 compute-0 sudo[167275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:38:03 compute-0 sudo[167275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:03 compute-0 sudo[167275]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:03 compute-0 ceph-mon[74318]: pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:03 compute-0 sudo[167300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:38:03 compute-0 sudo[167300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:03 compute-0 sudo[167300]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:03 compute-0 sudo[167325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:38:03 compute-0 sudo[167325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:03.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:03 compute-0 sudo[167325]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:38:03 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:38:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:38:03 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:38:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:38:03 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:38:03 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 48471ad5-af5a-4017-b492-e79687a8d467 does not exist
Jan 21 23:38:03 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 9371955a-446a-4aa9-92b9-ad7ea2d0e7c4 does not exist
Jan 21 23:38:03 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev eb31c06e-e6f2-4f9d-9b0b-126471989840 does not exist
Jan 21 23:38:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:38:03 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:38:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:38:03 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:38:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:38:03 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:38:03 compute-0 sudo[167381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:38:03 compute-0 sudo[167381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:03 compute-0 sudo[167381]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:04 compute-0 sudo[167406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:38:04 compute-0 sudo[167406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:04 compute-0 sudo[167406]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:04 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:38:04 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:38:04 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:38:04 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:38:04 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:38:04 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:38:04 compute-0 sudo[167431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:38:04 compute-0 sudo[167431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:04 compute-0 sudo[167431]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:04 compute-0 sudo[167456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:38:04 compute-0 sudo[167456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:04.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:04 compute-0 podman[167521]: 2026-01-21 23:38:04.683275608 +0000 UTC m=+0.059649788 container create 46aab393fb0a46d25a592a5478918310562ee3885ea9a9e7b4c8d22bf58f6b36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 23:38:04 compute-0 systemd[1]: Started libpod-conmon-46aab393fb0a46d25a592a5478918310562ee3885ea9a9e7b4c8d22bf58f6b36.scope.
Jan 21 23:38:04 compute-0 podman[167521]: 2026-01-21 23:38:04.652625799 +0000 UTC m=+0.029000039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:38:04 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:38:04 compute-0 podman[167521]: 2026-01-21 23:38:04.773996765 +0000 UTC m=+0.150370985 container init 46aab393fb0a46d25a592a5478918310562ee3885ea9a9e7b4c8d22bf58f6b36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:38:04 compute-0 podman[167521]: 2026-01-21 23:38:04.782270736 +0000 UTC m=+0.158644886 container start 46aab393fb0a46d25a592a5478918310562ee3885ea9a9e7b4c8d22bf58f6b36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 23:38:04 compute-0 podman[167521]: 2026-01-21 23:38:04.78570435 +0000 UTC m=+0.162078540 container attach 46aab393fb0a46d25a592a5478918310562ee3885ea9a9e7b4c8d22bf58f6b36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:38:04 compute-0 practical_cohen[167538]: 167 167
Jan 21 23:38:04 compute-0 systemd[1]: libpod-46aab393fb0a46d25a592a5478918310562ee3885ea9a9e7b4c8d22bf58f6b36.scope: Deactivated successfully.
Jan 21 23:38:04 compute-0 podman[167521]: 2026-01-21 23:38:04.790327741 +0000 UTC m=+0.166701931 container died 46aab393fb0a46d25a592a5478918310562ee3885ea9a9e7b4c8d22bf58f6b36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 23:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-03946de059e263e04b7b97a94b1935eaaacc95c0a15404095f77f4652a22ab6c-merged.mount: Deactivated successfully.
Jan 21 23:38:04 compute-0 podman[167521]: 2026-01-21 23:38:04.879199042 +0000 UTC m=+0.255573222 container remove 46aab393fb0a46d25a592a5478918310562ee3885ea9a9e7b4c8d22bf58f6b36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 23:38:04 compute-0 systemd[1]: libpod-conmon-46aab393fb0a46d25a592a5478918310562ee3885ea9a9e7b4c8d22bf58f6b36.scope: Deactivated successfully.
Jan 21 23:38:05 compute-0 podman[167562]: 2026-01-21 23:38:05.065759412 +0000 UTC m=+0.056666277 container create cda0e006429208a5a7e786ad7912d9d127511d1c068942b2661a80af704edcd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chandrasekhar, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 23:38:05 compute-0 systemd[1]: Started libpod-conmon-cda0e006429208a5a7e786ad7912d9d127511d1c068942b2661a80af704edcd2.scope.
Jan 21 23:38:05 compute-0 ceph-mon[74318]: pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:05 compute-0 podman[167562]: 2026-01-21 23:38:05.042928551 +0000 UTC m=+0.033835496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:38:05 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a19a9aa76f57b81eb7a424a26f89d2dc310644ec69711a6d9541bc4b841339a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a19a9aa76f57b81eb7a424a26f89d2dc310644ec69711a6d9541bc4b841339a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a19a9aa76f57b81eb7a424a26f89d2dc310644ec69711a6d9541bc4b841339a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a19a9aa76f57b81eb7a424a26f89d2dc310644ec69711a6d9541bc4b841339a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a19a9aa76f57b81eb7a424a26f89d2dc310644ec69711a6d9541bc4b841339a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:38:05 compute-0 podman[167562]: 2026-01-21 23:38:05.173374002 +0000 UTC m=+0.164280937 container init cda0e006429208a5a7e786ad7912d9d127511d1c068942b2661a80af704edcd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chandrasekhar, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 23:38:05 compute-0 podman[167562]: 2026-01-21 23:38:05.18619768 +0000 UTC m=+0.177104575 container start cda0e006429208a5a7e786ad7912d9d127511d1c068942b2661a80af704edcd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chandrasekhar, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:38:05 compute-0 podman[167562]: 2026-01-21 23:38:05.190747109 +0000 UTC m=+0.181654084 container attach cda0e006429208a5a7e786ad7912d9d127511d1c068942b2661a80af704edcd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:38:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:05.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:06 compute-0 gifted_chandrasekhar[167579]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:38:06 compute-0 gifted_chandrasekhar[167579]: --> relative data size: 1.0
Jan 21 23:38:06 compute-0 gifted_chandrasekhar[167579]: --> All data devices are unavailable
Jan 21 23:38:06 compute-0 systemd[1]: libpod-cda0e006429208a5a7e786ad7912d9d127511d1c068942b2661a80af704edcd2.scope: Deactivated successfully.
Jan 21 23:38:06 compute-0 podman[167562]: 2026-01-21 23:38:06.068949038 +0000 UTC m=+1.059855963 container died cda0e006429208a5a7e786ad7912d9d127511d1c068942b2661a80af704edcd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:38:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a19a9aa76f57b81eb7a424a26f89d2dc310644ec69711a6d9541bc4b841339a-merged.mount: Deactivated successfully.
Jan 21 23:38:06 compute-0 podman[167562]: 2026-01-21 23:38:06.170148904 +0000 UTC m=+1.161055779 container remove cda0e006429208a5a7e786ad7912d9d127511d1c068942b2661a80af704edcd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chandrasekhar, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 23:38:06 compute-0 systemd[1]: libpod-conmon-cda0e006429208a5a7e786ad7912d9d127511d1c068942b2661a80af704edcd2.scope: Deactivated successfully.
Jan 21 23:38:06 compute-0 sudo[167456]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:06 compute-0 sudo[167610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:38:06 compute-0 sudo[167610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:06 compute-0 sudo[167610]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:06 compute-0 sudo[167635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:38:06 compute-0 sudo[167635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:06 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Jan 21 23:38:06 compute-0 sudo[167635]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:06 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 23:38:06 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 21 23:38:06 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 23:38:06 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 21 23:38:06 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 23:38:06 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 23:38:06 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 23:38:06 compute-0 sudo[167662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:38:06 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 21 23:38:06 compute-0 sudo[167662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:06 compute-0 sudo[167662]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:06 compute-0 sudo[167687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:38:06 compute-0 sudo[167687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:38:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:06.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:38:06 compute-0 podman[167748]: 2026-01-21 23:38:06.88747804 +0000 UTC m=+0.066048501 container create 8774d5fbbb2dd3e300fcaec68fa6f3629638cca22b410d38370c840fdbcb0315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:38:06 compute-0 systemd[1]: Started libpod-conmon-8774d5fbbb2dd3e300fcaec68fa6f3629638cca22b410d38370c840fdbcb0315.scope.
Jan 21 23:38:06 compute-0 podman[167748]: 2026-01-21 23:38:06.849915363 +0000 UTC m=+0.028485914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:38:06 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:38:07 compute-0 podman[167748]: 2026-01-21 23:38:07.009173177 +0000 UTC m=+0.187743728 container init 8774d5fbbb2dd3e300fcaec68fa6f3629638cca22b410d38370c840fdbcb0315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:38:07 compute-0 podman[167748]: 2026-01-21 23:38:07.024695646 +0000 UTC m=+0.203266137 container start 8774d5fbbb2dd3e300fcaec68fa6f3629638cca22b410d38370c840fdbcb0315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kare, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 23:38:07 compute-0 podman[167748]: 2026-01-21 23:38:07.02941227 +0000 UTC m=+0.207982771 container attach 8774d5fbbb2dd3e300fcaec68fa6f3629638cca22b410d38370c840fdbcb0315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:38:07 compute-0 gallant_kare[167764]: 167 167
Jan 21 23:38:07 compute-0 systemd[1]: libpod-8774d5fbbb2dd3e300fcaec68fa6f3629638cca22b410d38370c840fdbcb0315.scope: Deactivated successfully.
Jan 21 23:38:07 compute-0 podman[167748]: 2026-01-21 23:38:07.0330643 +0000 UTC m=+0.211634821 container died 8774d5fbbb2dd3e300fcaec68fa6f3629638cca22b410d38370c840fdbcb0315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kare, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:38:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-37a74282deadb4de827e965d9a5793a0950f5e645472121ba9c3f058786b5db2-merged.mount: Deactivated successfully.
Jan 21 23:38:07 compute-0 podman[167748]: 2026-01-21 23:38:07.094783779 +0000 UTC m=+0.273354270 container remove 8774d5fbbb2dd3e300fcaec68fa6f3629638cca22b410d38370c840fdbcb0315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:38:07 compute-0 systemd[1]: libpod-conmon-8774d5fbbb2dd3e300fcaec68fa6f3629638cca22b410d38370c840fdbcb0315.scope: Deactivated successfully.
Jan 21 23:38:07 compute-0 ceph-mon[74318]: pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:07 compute-0 podman[167788]: 2026-01-21 23:38:07.3440687 +0000 UTC m=+0.077962312 container create abbf4ecc36e7603390301a9835d71096ac565957f206e3a91a7e4a11e82db3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_yonath, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:38:07 compute-0 podman[167788]: 2026-01-21 23:38:07.311011849 +0000 UTC m=+0.044905521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:38:07 compute-0 systemd[1]: Started libpod-conmon-abbf4ecc36e7603390301a9835d71096ac565957f206e3a91a7e4a11e82db3ba.scope.
Jan 21 23:38:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:38:07 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a1a982d826d5deb6df7383ee4239cb7c01d2e9bd5a225bcdb37a6d7ef29b3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a1a982d826d5deb6df7383ee4239cb7c01d2e9bd5a225bcdb37a6d7ef29b3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a1a982d826d5deb6df7383ee4239cb7c01d2e9bd5a225bcdb37a6d7ef29b3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a1a982d826d5deb6df7383ee4239cb7c01d2e9bd5a225bcdb37a6d7ef29b3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:38:07 compute-0 podman[167788]: 2026-01-21 23:38:07.487871286 +0000 UTC m=+0.221764878 container init abbf4ecc36e7603390301a9835d71096ac565957f206e3a91a7e4a11e82db3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_yonath, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:38:07 compute-0 podman[167788]: 2026-01-21 23:38:07.496517448 +0000 UTC m=+0.230411040 container start abbf4ecc36e7603390301a9835d71096ac565957f206e3a91a7e4a11e82db3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:38:07 compute-0 podman[167788]: 2026-01-21 23:38:07.500299993 +0000 UTC m=+0.234193585 container attach abbf4ecc36e7603390301a9835d71096ac565957f206e3a91a7e4a11e82db3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:38:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:07.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]: {
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:     "1": [
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:         {
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:             "devices": [
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:                 "/dev/loop3"
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:             ],
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:             "lv_name": "ceph_lv0",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:             "lv_size": "7511998464",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:             "name": "ceph_lv0",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:             "tags": {
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:                 "ceph.cluster_name": "ceph",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:                 "ceph.crush_device_class": "",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:                 "ceph.encrypted": "0",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:                 "ceph.osd_id": "1",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:                 "ceph.type": "block",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:                 "ceph.vdo": "0"
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:             },
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:             "type": "block",
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:             "vg_name": "ceph_vg0"
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:         }
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]:     ]
Jan 21 23:38:08 compute-0 peaceful_yonath[167807]: }
Jan 21 23:38:08 compute-0 systemd[1]: libpod-abbf4ecc36e7603390301a9835d71096ac565957f206e3a91a7e4a11e82db3ba.scope: Deactivated successfully.
Jan 21 23:38:08 compute-0 podman[167788]: 2026-01-21 23:38:08.290973981 +0000 UTC m=+1.024867563 container died abbf4ecc36e7603390301a9835d71096ac565957f206e3a91a7e4a11e82db3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:38:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-08a1a982d826d5deb6df7383ee4239cb7c01d2e9bd5a225bcdb37a6d7ef29b3c-merged.mount: Deactivated successfully.
Jan 21 23:38:08 compute-0 podman[167788]: 2026-01-21 23:38:08.351771312 +0000 UTC m=+1.085664894 container remove abbf4ecc36e7603390301a9835d71096ac565957f206e3a91a7e4a11e82db3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:38:08 compute-0 systemd[1]: libpod-conmon-abbf4ecc36e7603390301a9835d71096ac565957f206e3a91a7e4a11e82db3ba.scope: Deactivated successfully.
Jan 21 23:38:08 compute-0 sudo[167687]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:08 compute-0 sudo[167830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:38:08 compute-0 sudo[167830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:08 compute-0 sudo[167830]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:08 compute-0 sudo[167855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:38:08 compute-0 sudo[167855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:08 compute-0 sudo[167855]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:08 compute-0 sudo[167880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:38:08 compute-0 sudo[167880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:08 compute-0 sudo[167880]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:08 compute-0 sudo[167905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:38:08 compute-0 sudo[167905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:08.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:09 compute-0 podman[167970]: 2026-01-21 23:38:09.11403105 +0000 UTC m=+0.086689276 container create 8b01f6a5d6aa120b641a717e99f5903b985c99cc463961e9908819c4642393e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 23:38:09 compute-0 podman[167970]: 2026-01-21 23:38:09.070822592 +0000 UTC m=+0.043480898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:38:09 compute-0 systemd[1]: Started libpod-conmon-8b01f6a5d6aa120b641a717e99f5903b985c99cc463961e9908819c4642393e9.scope.
Jan 21 23:38:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:38:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:38:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:38:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:38:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:38:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:38:09 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:38:09 compute-0 ceph-mon[74318]: pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:09 compute-0 podman[167970]: 2026-01-21 23:38:09.260809387 +0000 UTC m=+0.233467643 container init 8b01f6a5d6aa120b641a717e99f5903b985c99cc463961e9908819c4642393e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hawking, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Jan 21 23:38:09 compute-0 podman[167970]: 2026-01-21 23:38:09.270792068 +0000 UTC m=+0.243450294 container start 8b01f6a5d6aa120b641a717e99f5903b985c99cc463961e9908819c4642393e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hawking, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 23:38:09 compute-0 stupefied_hawking[167986]: 167 167
Jan 21 23:38:09 compute-0 systemd[1]: libpod-8b01f6a5d6aa120b641a717e99f5903b985c99cc463961e9908819c4642393e9.scope: Deactivated successfully.
Jan 21 23:38:09 compute-0 podman[167970]: 2026-01-21 23:38:09.279077459 +0000 UTC m=+0.251735785 container attach 8b01f6a5d6aa120b641a717e99f5903b985c99cc463961e9908819c4642393e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:38:09 compute-0 podman[167970]: 2026-01-21 23:38:09.281148582 +0000 UTC m=+0.253806848 container died 8b01f6a5d6aa120b641a717e99f5903b985c99cc463961e9908819c4642393e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 23:38:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a5843d9eb2bbcddbee7c050cfd2d598b41708265e3f1c6683e1e280ab8329cc-merged.mount: Deactivated successfully.
Jan 21 23:38:09 compute-0 podman[167970]: 2026-01-21 23:38:09.329330151 +0000 UTC m=+0.301988407 container remove 8b01f6a5d6aa120b641a717e99f5903b985c99cc463961e9908819c4642393e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hawking, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 23:38:09 compute-0 systemd[1]: libpod-conmon-8b01f6a5d6aa120b641a717e99f5903b985c99cc463961e9908819c4642393e9.scope: Deactivated successfully.
Jan 21 23:38:09 compute-0 podman[168011]: 2026-01-21 23:38:09.484415909 +0000 UTC m=+0.041788356 container create 64e491e416f0d2b92f102b87a1e2800c1b104de751e3588e07bc702ac00868c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 23:38:09 compute-0 systemd[1]: Started libpod-conmon-64e491e416f0d2b92f102b87a1e2800c1b104de751e3588e07bc702ac00868c0.scope.
Jan 21 23:38:09 compute-0 auditd[702]: Audit daemon rotating log files
Jan 21 23:38:09 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9631be18aec66c7b8649e0ff01931be64d9fa69addc45dbbc6f91fe09ee103/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9631be18aec66c7b8649e0ff01931be64d9fa69addc45dbbc6f91fe09ee103/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9631be18aec66c7b8649e0ff01931be64d9fa69addc45dbbc6f91fe09ee103/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9631be18aec66c7b8649e0ff01931be64d9fa69addc45dbbc6f91fe09ee103/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:38:09 compute-0 podman[168011]: 2026-01-21 23:38:09.468376754 +0000 UTC m=+0.025749221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:38:09 compute-0 podman[168011]: 2026-01-21 23:38:09.581666925 +0000 UTC m=+0.139039392 container init 64e491e416f0d2b92f102b87a1e2800c1b104de751e3588e07bc702ac00868c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 21 23:38:09 compute-0 podman[168011]: 2026-01-21 23:38:09.590482602 +0000 UTC m=+0.147855079 container start 64e491e416f0d2b92f102b87a1e2800c1b104de751e3588e07bc702ac00868c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:38:09 compute-0 podman[168011]: 2026-01-21 23:38:09.608137247 +0000 UTC m=+0.165509694 container attach 64e491e416f0d2b92f102b87a1e2800c1b104de751e3588e07bc702ac00868c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:38:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:09.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:10 compute-0 pensive_brahmagupta[168027]: {
Jan 21 23:38:10 compute-0 pensive_brahmagupta[168027]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:38:10 compute-0 pensive_brahmagupta[168027]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:38:10 compute-0 pensive_brahmagupta[168027]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:38:10 compute-0 pensive_brahmagupta[168027]:         "osd_id": 1,
Jan 21 23:38:10 compute-0 pensive_brahmagupta[168027]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:38:10 compute-0 pensive_brahmagupta[168027]:         "type": "bluestore"
Jan 21 23:38:10 compute-0 pensive_brahmagupta[168027]:     }
Jan 21 23:38:10 compute-0 pensive_brahmagupta[168027]: }
Jan 21 23:38:10 compute-0 systemd[1]: libpod-64e491e416f0d2b92f102b87a1e2800c1b104de751e3588e07bc702ac00868c0.scope: Deactivated successfully.
Jan 21 23:38:10 compute-0 conmon[168027]: conmon 64e491e416f0d2b92f10 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-64e491e416f0d2b92f102b87a1e2800c1b104de751e3588e07bc702ac00868c0.scope/container/memory.events
Jan 21 23:38:10 compute-0 podman[168011]: 2026-01-21 23:38:10.50098162 +0000 UTC m=+1.058354087 container died 64e491e416f0d2b92f102b87a1e2800c1b104de751e3588e07bc702ac00868c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:38:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b9631be18aec66c7b8649e0ff01931be64d9fa69addc45dbbc6f91fe09ee103-merged.mount: Deactivated successfully.
Jan 21 23:38:10 compute-0 podman[168011]: 2026-01-21 23:38:10.581626592 +0000 UTC m=+1.138999079 container remove 64e491e416f0d2b92f102b87a1e2800c1b104de751e3588e07bc702ac00868c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:38:10 compute-0 systemd[1]: libpod-conmon-64e491e416f0d2b92f102b87a1e2800c1b104de751e3588e07bc702ac00868c0.scope: Deactivated successfully.
Jan 21 23:38:10 compute-0 sudo[167905]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:38:10 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:38:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:38:10 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:38:10 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 7fe6fbb7-505d-4927-8657-ef7eb4590f98 does not exist
Jan 21 23:38:10 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 0c517f1b-0997-468b-ad85-1d5b20b67a0c does not exist
Jan 21 23:38:10 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev cbdc2dfe-7cdb-4caf-b2c5-fa15f3537624 does not exist
Jan 21 23:38:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:10.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:10 compute-0 sudo[168063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:38:10 compute-0 sudo[168063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:10 compute-0 sudo[168063]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:10 compute-0 sudo[168088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:38:10 compute-0 sudo[168088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:10 compute-0 sudo[168088]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:11 compute-0 ceph-mon[74318]: pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:38:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:38:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:11.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:38:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:12.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:13 compute-0 ceph-mon[74318]: pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:13.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:38:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:14.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:38:15 compute-0 ceph-mon[74318]: pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:15.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:15 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Jan 21 23:38:15 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 23:38:15 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 21 23:38:15 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 23:38:15 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 21 23:38:15 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 23:38:15 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 23:38:15 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 23:38:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:16 compute-0 ceph-mon[74318]: pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:38:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:16.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:38:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:38:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:17.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:18.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:18 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 21 23:38:18 compute-0 podman[168124]: 2026-01-21 23:38:18.966406798 +0000 UTC m=+0.068653214 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 21 23:38:19 compute-0 ceph-mon[74318]: pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:19 compute-0 sudo[168144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:38:19 compute-0 sudo[168144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:19 compute-0 sudo[168144]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:19 compute-0 sudo[168169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:38:19 compute-0 sudo[168169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:19 compute-0 sudo[168169]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:19.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:20.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:21 compute-0 ceph-mon[74318]: pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:38:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:21.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:38:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:38:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:38:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:22.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:38:23 compute-0 ceph-mon[74318]: pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:38:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:23.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:38:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:38:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:24.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:38:25 compute-0 ceph-mon[74318]: pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:25.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:38:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:26.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:38:27 compute-0 ceph-mon[74318]: pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:38:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:27.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:28.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:29 compute-0 ceph-mon[74318]: pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:29.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:30 compute-0 ceph-mon[74318]: pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:38:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:30.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:38:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:31.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:32 compute-0 podman[170779]: 2026-01-21 23:38:32.009160589 +0000 UTC m=+0.113994695 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 23:38:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:38:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:38:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:32.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:38:33 compute-0 ceph-mon[74318]: pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:33.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:38:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:34.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:38:35 compute-0 ceph-mon[74318]: pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:35.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:36.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:37 compute-0 ceph-mon[74318]: pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:38:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:38:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:37.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:38:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:38.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:39 compute-0 ceph-mon[74318]: pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:38:39
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'images', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'volumes']
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:38:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:38:39 compute-0 sudo[174671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:38:39 compute-0 sudo[174671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:39 compute-0 sudo[174671]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:39 compute-0 sudo[174737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:38:39 compute-0 sudo[174737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:39 compute-0 sudo[174737]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:39.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:40.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:41 compute-0 ceph-mon[74318]: pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:41.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:38:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:42.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:43 compute-0 ceph-mon[74318]: pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:43.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:44.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:45 compute-0 ceph-mon[74318]: pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:45.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:38:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:46.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:38:47 compute-0 ceph-mon[74318]: pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:38:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:38:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:47.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:38:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:38:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:48.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:38:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:38:48.731 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:38:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:38:48.736 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:38:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:38:48.736 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:38:49 compute-0 ceph-mon[74318]: pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:49.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:49 compute-0 podman[180367]: 2026-01-21 23:38:49.977201484 +0000 UTC m=+0.076098010 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 23:38:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:50.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:51 compute-0 ceph-mon[74318]: pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:51.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:38:52 compute-0 ceph-mon[74318]: pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:38:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:52.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:38:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:53.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:38:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:38:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:54.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:55 compute-0 ceph-mon[74318]: pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:55.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:56.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:57 compute-0 ceph-mon[74318]: pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:38:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:57.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:38:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:38:58.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:38:59 compute-0 ceph-mon[74318]: pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:38:59 compute-0 sudo[185165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:38:59 compute-0 sudo[185165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:59 compute-0 sudo[185165]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:59 compute-0 sudo[185190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:38:59 compute-0 sudo[185190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:38:59 compute-0 sudo[185190]: pam_unix(sudo:session): session closed for user root
Jan 21 23:38:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:38:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:38:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:38:59.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:39:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:00.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:01 compute-0 ceph-mon[74318]: pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:01.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:39:02 compute-0 ceph-mon[74318]: pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:39:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:02.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:39:03 compute-0 podman[185232]: 2026-01-21 23:39:03.046395032 +0000 UTC m=+0.140327148 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 23:39:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:03.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:04.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:05 compute-0 ceph-mon[74318]: pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:05.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:06.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:07 compute-0 ceph-mon[74318]: pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:39:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:39:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:07.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:39:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:39:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:08.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:39:09 compute-0 ceph-mon[74318]: pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:39:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:39:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:39:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:39:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:39:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:39:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:39:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:09.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:39:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:10.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:11 compute-0 sudo[185266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:39:11 compute-0 sudo[185266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:11 compute-0 sudo[185266]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:11 compute-0 sudo[185294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:39:11 compute-0 sudo[185294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:11 compute-0 sudo[185294]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:11 compute-0 sudo[185320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:39:11 compute-0 sudo[185320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:11 compute-0 sudo[185320]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:11 compute-0 sudo[185345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:39:11 compute-0 sudo[185345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:11 compute-0 ceph-mon[74318]: pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:11.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:11 compute-0 kernel: SELinux:  Converting 2778 SID table entries...
Jan 21 23:39:11 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 23:39:11 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 21 23:39:11 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 23:39:11 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 21 23:39:11 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 23:39:11 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 23:39:11 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 23:39:12 compute-0 sudo[185345]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:39:12 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:39:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:39:12 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:39:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:39:12 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:39:12 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 1e123cd3-cd8f-4d19-9415-3a2af0f6664d does not exist
Jan 21 23:39:12 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 2cfe93db-116e-4efa-b50e-1d5cf35bcfaf does not exist
Jan 21 23:39:12 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev b7e90bfc-a51c-421d-b7e1-054257d03fb0 does not exist
Jan 21 23:39:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:39:12 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:39:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:39:12 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:39:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:39:12 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:39:12 compute-0 sudo[185403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:39:12 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 21 23:39:12 compute-0 sudo[185403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:12 compute-0 sudo[185403]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:12 compute-0 sudo[185428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:39:12 compute-0 sudo[185428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:12 compute-0 sudo[185428]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:39:12 compute-0 sudo[185453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:39:12 compute-0 sudo[185453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:12 compute-0 sudo[185453]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:12 compute-0 sudo[185478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:39:12 compute-0 sudo[185478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:12 compute-0 ceph-mon[74318]: pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:39:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:39:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:39:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:39:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:39:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:39:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:12.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:12 compute-0 groupadd[185562]: group added to /etc/group: name=dnsmasq, GID=992
Jan 21 23:39:12 compute-0 groupadd[185562]: group added to /etc/gshadow: name=dnsmasq
Jan 21 23:39:12 compute-0 podman[185548]: 2026-01-21 23:39:12.998249496 +0000 UTC m=+0.079383631 container create 15052a0ad11691f3656666c61fcd0c2c159f8724a1758d9ef849f62723489db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:39:13 compute-0 groupadd[185562]: new group: name=dnsmasq, GID=992
Jan 21 23:39:13 compute-0 systemd[1]: Started libpod-conmon-15052a0ad11691f3656666c61fcd0c2c159f8724a1758d9ef849f62723489db9.scope.
Jan 21 23:39:13 compute-0 podman[185548]: 2026-01-21 23:39:12.965824177 +0000 UTC m=+0.046958343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:39:13 compute-0 useradd[185572]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 21 23:39:13 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:39:13 compute-0 podman[185548]: 2026-01-21 23:39:13.100578745 +0000 UTC m=+0.181712870 container init 15052a0ad11691f3656666c61fcd0c2c159f8724a1758d9ef849f62723489db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:39:13 compute-0 podman[185548]: 2026-01-21 23:39:13.112164789 +0000 UTC m=+0.193298904 container start 15052a0ad11691f3656666c61fcd0c2c159f8724a1758d9ef849f62723489db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 21 23:39:13 compute-0 podman[185548]: 2026-01-21 23:39:13.117393828 +0000 UTC m=+0.198527953 container attach 15052a0ad11691f3656666c61fcd0c2c159f8724a1758d9ef849f62723489db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:39:13 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 21 23:39:13 compute-0 elated_williamson[185575]: 167 167
Jan 21 23:39:13 compute-0 systemd[1]: libpod-15052a0ad11691f3656666c61fcd0c2c159f8724a1758d9ef849f62723489db9.scope: Deactivated successfully.
Jan 21 23:39:13 compute-0 conmon[185575]: conmon 15052a0ad11691f36566 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15052a0ad11691f3656666c61fcd0c2c159f8724a1758d9ef849f62723489db9.scope/container/memory.events
Jan 21 23:39:13 compute-0 podman[185548]: 2026-01-21 23:39:13.125272768 +0000 UTC m=+0.206406883 container died 15052a0ad11691f3656666c61fcd0c2c159f8724a1758d9ef849f62723489db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 21 23:39:13 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 21 23:39:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c1cae081d22c97600845c3e32131f3c8022388ba030fdeebf1349fb3698bcf6-merged.mount: Deactivated successfully.
Jan 21 23:39:13 compute-0 podman[185548]: 2026-01-21 23:39:13.189179787 +0000 UTC m=+0.270313912 container remove 15052a0ad11691f3656666c61fcd0c2c159f8724a1758d9ef849f62723489db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 21 23:39:13 compute-0 systemd[1]: libpod-conmon-15052a0ad11691f3656666c61fcd0c2c159f8724a1758d9ef849f62723489db9.scope: Deactivated successfully.
Jan 21 23:39:13 compute-0 podman[185610]: 2026-01-21 23:39:13.436755793 +0000 UTC m=+0.071383347 container create 916995908d4cb7e275353078ac36dab32c8b35f7e56fad06186c7c4112c80040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 21 23:39:13 compute-0 systemd[1]: Started libpod-conmon-916995908d4cb7e275353078ac36dab32c8b35f7e56fad06186c7c4112c80040.scope.
Jan 21 23:39:13 compute-0 podman[185610]: 2026-01-21 23:39:13.404013935 +0000 UTC m=+0.038641539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:39:13 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01241bd6545a9493eacc1a9d7aa1be9de66b8d13b3cd929cb6e5c802e8f37734/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01241bd6545a9493eacc1a9d7aa1be9de66b8d13b3cd929cb6e5c802e8f37734/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01241bd6545a9493eacc1a9d7aa1be9de66b8d13b3cd929cb6e5c802e8f37734/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01241bd6545a9493eacc1a9d7aa1be9de66b8d13b3cd929cb6e5c802e8f37734/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01241bd6545a9493eacc1a9d7aa1be9de66b8d13b3cd929cb6e5c802e8f37734/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:39:13 compute-0 podman[185610]: 2026-01-21 23:39:13.560228127 +0000 UTC m=+0.194855731 container init 916995908d4cb7e275353078ac36dab32c8b35f7e56fad06186c7c4112c80040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jackson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 21 23:39:13 compute-0 podman[185610]: 2026-01-21 23:39:13.567193399 +0000 UTC m=+0.201820953 container start 916995908d4cb7e275353078ac36dab32c8b35f7e56fad06186c7c4112c80040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 23:39:13 compute-0 podman[185610]: 2026-01-21 23:39:13.571193651 +0000 UTC m=+0.205821185 container attach 916995908d4cb7e275353078ac36dab32c8b35f7e56fad06186c7c4112c80040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:39:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.002000062s ======
Jan 21 23:39:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:13.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000062s
Jan 21 23:39:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:14 compute-0 groupadd[185639]: group added to /etc/group: name=clevis, GID=991
Jan 21 23:39:14 compute-0 groupadd[185639]: group added to /etc/gshadow: name=clevis
Jan 21 23:39:14 compute-0 groupadd[185639]: new group: name=clevis, GID=991
Jan 21 23:39:14 compute-0 useradd[185648]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 21 23:39:14 compute-0 amazing_jackson[185627]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:39:14 compute-0 amazing_jackson[185627]: --> relative data size: 1.0
Jan 21 23:39:14 compute-0 amazing_jackson[185627]: --> All data devices are unavailable
Jan 21 23:39:14 compute-0 usermod[185662]: add 'clevis' to group 'tss'
Jan 21 23:39:14 compute-0 usermod[185662]: add 'clevis' to shadow group 'tss'
Jan 21 23:39:14 compute-0 systemd[1]: libpod-916995908d4cb7e275353078ac36dab32c8b35f7e56fad06186c7c4112c80040.scope: Deactivated successfully.
Jan 21 23:39:14 compute-0 podman[185610]: 2026-01-21 23:39:14.400723317 +0000 UTC m=+1.035350871 container died 916995908d4cb7e275353078ac36dab32c8b35f7e56fad06186c7c4112c80040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 23:39:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-01241bd6545a9493eacc1a9d7aa1be9de66b8d13b3cd929cb6e5c802e8f37734-merged.mount: Deactivated successfully.
Jan 21 23:39:14 compute-0 podman[185610]: 2026-01-21 23:39:14.488940265 +0000 UTC m=+1.123567829 container remove 916995908d4cb7e275353078ac36dab32c8b35f7e56fad06186c7c4112c80040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jackson, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 23:39:14 compute-0 systemd[1]: libpod-conmon-916995908d4cb7e275353078ac36dab32c8b35f7e56fad06186c7c4112c80040.scope: Deactivated successfully.
Jan 21 23:39:14 compute-0 sudo[185478]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:14 compute-0 sudo[185685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:39:14 compute-0 sudo[185685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:14 compute-0 sudo[185685]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:14 compute-0 sudo[185710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:39:14 compute-0 sudo[185710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:14 compute-0 sudo[185710]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:14 compute-0 sudo[185735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:39:14 compute-0 sudo[185735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:14 compute-0 sudo[185735]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:14.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:14 compute-0 sudo[185760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:39:14 compute-0 sudo[185760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:15 compute-0 ceph-mon[74318]: pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:15 compute-0 podman[185829]: 2026-01-21 23:39:15.20228199 +0000 UTC m=+0.046153598 container create 600f8eba965b553d52ace920fa823cacd575cda98256885f2a2fb6e32b18f67e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_einstein, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 23:39:15 compute-0 systemd[1]: Started libpod-conmon-600f8eba965b553d52ace920fa823cacd575cda98256885f2a2fb6e32b18f67e.scope.
Jan 21 23:39:15 compute-0 podman[185829]: 2026-01-21 23:39:15.183637851 +0000 UTC m=+0.027509499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:39:15 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:39:15 compute-0 podman[185829]: 2026-01-21 23:39:15.306443085 +0000 UTC m=+0.150314783 container init 600f8eba965b553d52ace920fa823cacd575cda98256885f2a2fb6e32b18f67e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 23:39:15 compute-0 podman[185829]: 2026-01-21 23:39:15.318262735 +0000 UTC m=+0.162134353 container start 600f8eba965b553d52ace920fa823cacd575cda98256885f2a2fb6e32b18f67e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 23:39:15 compute-0 podman[185829]: 2026-01-21 23:39:15.321292828 +0000 UTC m=+0.165164476 container attach 600f8eba965b553d52ace920fa823cacd575cda98256885f2a2fb6e32b18f67e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_einstein, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 23:39:15 compute-0 festive_einstein[185849]: 167 167
Jan 21 23:39:15 compute-0 systemd[1]: libpod-600f8eba965b553d52ace920fa823cacd575cda98256885f2a2fb6e32b18f67e.scope: Deactivated successfully.
Jan 21 23:39:15 compute-0 podman[185829]: 2026-01-21 23:39:15.343239196 +0000 UTC m=+0.187110834 container died 600f8eba965b553d52ace920fa823cacd575cda98256885f2a2fb6e32b18f67e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_einstein, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-65d156f24a610c79b754be83bd7ce68df938a3aa01097c7263a1d6d8bbef4df6-merged.mount: Deactivated successfully.
Jan 21 23:39:15 compute-0 podman[185829]: 2026-01-21 23:39:15.399750329 +0000 UTC m=+0.243621967 container remove 600f8eba965b553d52ace920fa823cacd575cda98256885f2a2fb6e32b18f67e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:39:15 compute-0 systemd[1]: libpod-conmon-600f8eba965b553d52ace920fa823cacd575cda98256885f2a2fb6e32b18f67e.scope: Deactivated successfully.
Jan 21 23:39:15 compute-0 podman[185880]: 2026-01-21 23:39:15.642483888 +0000 UTC m=+0.059773202 container create 021c1c28b4b0d9d25f04090e250baea327cd20794328a09bf82798bb5bb3765d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_morse, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:39:15 compute-0 systemd[1]: Started libpod-conmon-021c1c28b4b0d9d25f04090e250baea327cd20794328a09bf82798bb5bb3765d.scope.
Jan 21 23:39:15 compute-0 podman[185880]: 2026-01-21 23:39:15.621798228 +0000 UTC m=+0.039087532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:39:15 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533f272999ac1e3b1cdedb24527115de15a149451869f7a4a7f36c41e8c0a972/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533f272999ac1e3b1cdedb24527115de15a149451869f7a4a7f36c41e8c0a972/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533f272999ac1e3b1cdedb24527115de15a149451869f7a4a7f36c41e8c0a972/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533f272999ac1e3b1cdedb24527115de15a149451869f7a4a7f36c41e8c0a972/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:39:15 compute-0 podman[185880]: 2026-01-21 23:39:15.75931511 +0000 UTC m=+0.176604484 container init 021c1c28b4b0d9d25f04090e250baea327cd20794328a09bf82798bb5bb3765d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:39:15 compute-0 podman[185880]: 2026-01-21 23:39:15.771410598 +0000 UTC m=+0.188699912 container start 021c1c28b4b0d9d25f04090e250baea327cd20794328a09bf82798bb5bb3765d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_morse, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 21 23:39:15 compute-0 podman[185880]: 2026-01-21 23:39:15.776449981 +0000 UTC m=+0.193739305 container attach 021c1c28b4b0d9d25f04090e250baea327cd20794328a09bf82798bb5bb3765d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_morse, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 23:39:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:15.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:16 compute-0 ceph-mon[74318]: pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:16 compute-0 recursing_morse[185896]: {
Jan 21 23:39:16 compute-0 recursing_morse[185896]:     "1": [
Jan 21 23:39:16 compute-0 recursing_morse[185896]:         {
Jan 21 23:39:16 compute-0 recursing_morse[185896]:             "devices": [
Jan 21 23:39:16 compute-0 recursing_morse[185896]:                 "/dev/loop3"
Jan 21 23:39:16 compute-0 recursing_morse[185896]:             ],
Jan 21 23:39:16 compute-0 recursing_morse[185896]:             "lv_name": "ceph_lv0",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:             "lv_size": "7511998464",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:             "name": "ceph_lv0",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:             "tags": {
Jan 21 23:39:16 compute-0 recursing_morse[185896]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:                 "ceph.cluster_name": "ceph",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:                 "ceph.crush_device_class": "",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:                 "ceph.encrypted": "0",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:                 "ceph.osd_id": "1",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:                 "ceph.type": "block",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:                 "ceph.vdo": "0"
Jan 21 23:39:16 compute-0 recursing_morse[185896]:             },
Jan 21 23:39:16 compute-0 recursing_morse[185896]:             "type": "block",
Jan 21 23:39:16 compute-0 recursing_morse[185896]:             "vg_name": "ceph_vg0"
Jan 21 23:39:16 compute-0 recursing_morse[185896]:         }
Jan 21 23:39:16 compute-0 recursing_morse[185896]:     ]
Jan 21 23:39:16 compute-0 recursing_morse[185896]: }
Jan 21 23:39:16 compute-0 systemd[1]: libpod-021c1c28b4b0d9d25f04090e250baea327cd20794328a09bf82798bb5bb3765d.scope: Deactivated successfully.
Jan 21 23:39:16 compute-0 podman[185880]: 2026-01-21 23:39:16.646747661 +0000 UTC m=+1.064036965 container died 021c1c28b4b0d9d25f04090e250baea327cd20794328a09bf82798bb5bb3765d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 23:39:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-533f272999ac1e3b1cdedb24527115de15a149451869f7a4a7f36c41e8c0a972-merged.mount: Deactivated successfully.
Jan 21 23:39:16 compute-0 podman[185880]: 2026-01-21 23:39:16.711444962 +0000 UTC m=+1.128734256 container remove 021c1c28b4b0d9d25f04090e250baea327cd20794328a09bf82798bb5bb3765d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:39:16 compute-0 systemd[1]: libpod-conmon-021c1c28b4b0d9d25f04090e250baea327cd20794328a09bf82798bb5bb3765d.scope: Deactivated successfully.
Jan 21 23:39:16 compute-0 sudo[185760]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:16.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:16 compute-0 sudo[185916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:39:16 compute-0 sudo[185916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:16 compute-0 sudo[185916]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:16 compute-0 sudo[185941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:39:16 compute-0 sudo[185941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:16 compute-0 sudo[185941]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:16 compute-0 sudo[185967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:39:16 compute-0 sudo[185967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:16 compute-0 sudo[185967]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:17 compute-0 sudo[185995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:39:17 compute-0 sudo[185995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:17 compute-0 polkitd[43428]: Reloading rules
Jan 21 23:39:17 compute-0 polkitd[43428]: Collecting garbage unconditionally...
Jan 21 23:39:17 compute-0 polkitd[43428]: Loading rules from directory /etc/polkit-1/rules.d
Jan 21 23:39:17 compute-0 polkitd[43428]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 21 23:39:17 compute-0 polkitd[43428]: Finished loading, compiling and executing 3 rules
Jan 21 23:39:17 compute-0 polkitd[43428]: Reloading rules
Jan 21 23:39:17 compute-0 polkitd[43428]: Collecting garbage unconditionally...
Jan 21 23:39:17 compute-0 polkitd[43428]: Loading rules from directory /etc/polkit-1/rules.d
Jan 21 23:39:17 compute-0 polkitd[43428]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 21 23:39:17 compute-0 polkitd[43428]: Finished loading, compiling and executing 3 rules
Jan 21 23:39:17 compute-0 podman[186103]: 2026-01-21 23:39:17.392867974 +0000 UTC m=+0.054642526 container create 3f0c32a15601719a9aec093bfa1e25f20a7734ba5efd4763e1e62c71641ed59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:39:17 compute-0 systemd[1]: Started libpod-conmon-3f0c32a15601719a9aec093bfa1e25f20a7734ba5efd4763e1e62c71641ed59c.scope.
Jan 21 23:39:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:39:17 compute-0 podman[186103]: 2026-01-21 23:39:17.371755021 +0000 UTC m=+0.033529613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:39:17 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:39:17 compute-0 podman[186103]: 2026-01-21 23:39:17.484162626 +0000 UTC m=+0.145937228 container init 3f0c32a15601719a9aec093bfa1e25f20a7734ba5efd4763e1e62c71641ed59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_visvesvaraya, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 23:39:17 compute-0 podman[186103]: 2026-01-21 23:39:17.500939268 +0000 UTC m=+0.162713820 container start 3f0c32a15601719a9aec093bfa1e25f20a7734ba5efd4763e1e62c71641ed59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_visvesvaraya, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 21 23:39:17 compute-0 podman[186103]: 2026-01-21 23:39:17.5052676 +0000 UTC m=+0.167042192 container attach 3f0c32a15601719a9aec093bfa1e25f20a7734ba5efd4763e1e62c71641ed59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_visvesvaraya, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 23:39:17 compute-0 eager_visvesvaraya[186134]: 167 167
Jan 21 23:39:17 compute-0 systemd[1]: libpod-3f0c32a15601719a9aec093bfa1e25f20a7734ba5efd4763e1e62c71641ed59c.scope: Deactivated successfully.
Jan 21 23:39:17 compute-0 conmon[186134]: conmon 3f0c32a15601719a9aec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f0c32a15601719a9aec093bfa1e25f20a7734ba5efd4763e1e62c71641ed59c.scope/container/memory.events
Jan 21 23:39:17 compute-0 podman[186103]: 2026-01-21 23:39:17.509307254 +0000 UTC m=+0.171081816 container died 3f0c32a15601719a9aec093bfa1e25f20a7734ba5efd4763e1e62c71641ed59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:39:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-18740c8a137eaf536a121c6a8469c19f3202f19d4d504b04c78413fa3126de84-merged.mount: Deactivated successfully.
Jan 21 23:39:17 compute-0 podman[186103]: 2026-01-21 23:39:17.556912964 +0000 UTC m=+0.218687526 container remove 3f0c32a15601719a9aec093bfa1e25f20a7734ba5efd4763e1e62c71641ed59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_visvesvaraya, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:39:17 compute-0 systemd[1]: libpod-conmon-3f0c32a15601719a9aec093bfa1e25f20a7734ba5efd4763e1e62c71641ed59c.scope: Deactivated successfully.
Jan 21 23:39:17 compute-0 podman[186184]: 2026-01-21 23:39:17.732220958 +0000 UTC m=+0.045780307 container create b9e77f8c661fde2f5d9a99cba1c87eda200da775ec4627916582a0726765a021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_davinci, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:39:17 compute-0 systemd[1]: Started libpod-conmon-b9e77f8c661fde2f5d9a99cba1c87eda200da775ec4627916582a0726765a021.scope.
Jan 21 23:39:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:17.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:17 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:39:17 compute-0 podman[186184]: 2026-01-21 23:39:17.710812396 +0000 UTC m=+0.024371755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113f3bb19ee2594fc9d76820e70585a0d3d9dd8b2243ab6935f9f142c78a9a3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113f3bb19ee2594fc9d76820e70585a0d3d9dd8b2243ab6935f9f142c78a9a3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113f3bb19ee2594fc9d76820e70585a0d3d9dd8b2243ab6935f9f142c78a9a3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113f3bb19ee2594fc9d76820e70585a0d3d9dd8b2243ab6935f9f142c78a9a3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:39:17 compute-0 podman[186184]: 2026-01-21 23:39:17.817721654 +0000 UTC m=+0.131281023 container init b9e77f8c661fde2f5d9a99cba1c87eda200da775ec4627916582a0726765a021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_davinci, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:39:17 compute-0 podman[186184]: 2026-01-21 23:39:17.83036997 +0000 UTC m=+0.143929309 container start b9e77f8c661fde2f5d9a99cba1c87eda200da775ec4627916582a0726765a021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:39:17 compute-0 podman[186184]: 2026-01-21 23:39:17.833413023 +0000 UTC m=+0.146972372 container attach b9e77f8c661fde2f5d9a99cba1c87eda200da775ec4627916582a0726765a021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_davinci, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 21 23:39:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:18 compute-0 groupadd[186282]: group added to /etc/group: name=ceph, GID=167
Jan 21 23:39:18 compute-0 groupadd[186282]: group added to /etc/gshadow: name=ceph
Jan 21 23:39:18 compute-0 groupadd[186282]: new group: name=ceph, GID=167
Jan 21 23:39:18 compute-0 useradd[186288]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 21 23:39:18 compute-0 priceless_davinci[186218]: {
Jan 21 23:39:18 compute-0 priceless_davinci[186218]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:39:18 compute-0 priceless_davinci[186218]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:39:18 compute-0 priceless_davinci[186218]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:39:18 compute-0 priceless_davinci[186218]:         "osd_id": 1,
Jan 21 23:39:18 compute-0 priceless_davinci[186218]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:39:18 compute-0 priceless_davinci[186218]:         "type": "bluestore"
Jan 21 23:39:18 compute-0 priceless_davinci[186218]:     }
Jan 21 23:39:18 compute-0 priceless_davinci[186218]: }
Jan 21 23:39:18 compute-0 systemd[1]: libpod-b9e77f8c661fde2f5d9a99cba1c87eda200da775ec4627916582a0726765a021.scope: Deactivated successfully.
Jan 21 23:39:18 compute-0 podman[186184]: 2026-01-21 23:39:18.76469467 +0000 UTC m=+1.078254099 container died b9e77f8c661fde2f5d9a99cba1c87eda200da775ec4627916582a0726765a021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:39:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:18.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-113f3bb19ee2594fc9d76820e70585a0d3d9dd8b2243ab6935f9f142c78a9a3b-merged.mount: Deactivated successfully.
Jan 21 23:39:18 compute-0 podman[186184]: 2026-01-21 23:39:18.832717854 +0000 UTC m=+1.146277193 container remove b9e77f8c661fde2f5d9a99cba1c87eda200da775ec4627916582a0726765a021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:39:18 compute-0 systemd[1]: libpod-conmon-b9e77f8c661fde2f5d9a99cba1c87eda200da775ec4627916582a0726765a021.scope: Deactivated successfully.
Jan 21 23:39:18 compute-0 sudo[185995]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:39:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:39:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:39:19 compute-0 ceph-mon[74318]: pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:39:19 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 0405c0e1-4876-489f-9abc-22b91a357474 does not exist
Jan 21 23:39:19 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev ab98565f-1fb3-4b16-8637-fd9cce966181 does not exist
Jan 21 23:39:19 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 343c741e-6b55-4f44-a838-cb92ecf54af4 does not exist
Jan 21 23:39:19 compute-0 sudo[186324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:39:19 compute-0 sudo[186324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:19 compute-0 sudo[186324]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:19 compute-0 sudo[186349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:39:19 compute-0 sudo[186349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:19 compute-0 sudo[186349]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:19 compute-0 sudo[186374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:39:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:39:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:19.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:39:19 compute-0 sudo[186374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:19 compute-0 sudo[186374]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:19 compute-0 sudo[186399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:39:19 compute-0 sudo[186399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:19 compute-0 sudo[186399]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:39:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:39:20 compute-0 podman[186426]: 2026-01-21 23:39:20.362893526 +0000 UTC m=+0.091110547 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:39:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:39:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:20.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:39:21 compute-0 ceph-mon[74318]: pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:21 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 21 23:39:21 compute-0 sshd[1007]: Received signal 15; terminating.
Jan 21 23:39:21 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 21 23:39:21 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 21 23:39:21 compute-0 systemd[1]: sshd.service: Consumed 3.891s CPU time, read 564.0K from disk, written 0B to disk.
Jan 21 23:39:21 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 21 23:39:21 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 21 23:39:21 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 23:39:21 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 23:39:21 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 23:39:21 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 21 23:39:21 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 21 23:39:21 compute-0 sshd[187063]: Server listening on 0.0.0.0 port 22.
Jan 21 23:39:21 compute-0 sshd[187063]: Server listening on :: port 22.
Jan 21 23:39:21 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 21 23:39:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:21.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:22 compute-0 ceph-mon[74318]: pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:39:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:39:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:22.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:39:23 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 23:39:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:23.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:23 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 21 23:39:23 compute-0 systemd[1]: Reloading.
Jan 21 23:39:23 compute-0 systemd-rc-local-generator[187322]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:39:23 compute-0 systemd-sysv-generator[187326]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:39:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:24 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 23:39:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:24.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:25.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:25 compute-0 ceph-mon[74318]: pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:26 compute-0 sudo[166949]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:39:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:26.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:39:26 compute-0 ceph-mon[74318]: pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:39:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:27.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:39:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:28.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:39:29 compute-0 ceph-mon[74318]: pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:29.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:39:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:30.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:39:31 compute-0 ceph-mon[74318]: pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:31.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:39:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:32.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:33 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 23:39:33 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 23:39:33 compute-0 systemd[1]: man-db-cache-update.service: Consumed 11.791s CPU time.
Jan 21 23:39:33 compute-0 systemd[1]: run-r470e07812f1646a5ab85d1540681a9be.service: Deactivated successfully.
Jan 21 23:39:33 compute-0 ceph-mon[74318]: pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:33.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:34 compute-0 podman[195729]: 2026-01-21 23:39:34.05890967 +0000 UTC m=+0.164092575 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 21 23:39:34 compute-0 ceph-mon[74318]: pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:39:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:34.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:39:35 compute-0 sudo[195880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuqiwcvuvtprgfdrkktmxavyzgcccxzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038774.6808481-968-121287269132076/AnsiballZ_systemd.py'
Jan 21 23:39:35 compute-0 sudo[195880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:35 compute-0 python3.9[195882]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 23:39:35 compute-0 systemd[1]: Reloading.
Jan 21 23:39:35 compute-0 systemd-sysv-generator[195915]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:39:35 compute-0 systemd-rc-local-generator[195908]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:39:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:35.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:35 compute-0 sudo[195880]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:36 compute-0 sudo[196071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eemmfckbiytxhapdqgnwkjvsbpwyoboo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038776.1770294-968-35564204849964/AnsiballZ_systemd.py'
Jan 21 23:39:36 compute-0 sudo[196071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:36 compute-0 python3.9[196073]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 23:39:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:36.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:36 compute-0 systemd[1]: Reloading.
Jan 21 23:39:36 compute-0 systemd-sysv-generator[196108]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:39:36 compute-0 systemd-rc-local-generator[196104]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:39:37 compute-0 ceph-mon[74318]: pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:37 compute-0 sudo[196071]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:39:37 compute-0 sudo[196262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nopdxavyftxlfksphshqgndlbtzsfgkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038777.3472872-968-145179094266794/AnsiballZ_systemd.py'
Jan 21 23:39:37 compute-0 sudo[196262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:37.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:38 compute-0 python3.9[196264]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 23:39:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:38 compute-0 systemd[1]: Reloading.
Jan 21 23:39:38 compute-0 systemd-sysv-generator[196296]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:39:38 compute-0 systemd-rc-local-generator[196292]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:39:38 compute-0 sudo[196262]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:39:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:38.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:39:39 compute-0 sudo[196452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixldlytlqumjbhcjbumemgovcrkdhjbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038778.663141-968-93300385528764/AnsiballZ_systemd.py'
Jan 21 23:39:39 compute-0 sudo[196452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:39 compute-0 ceph-mon[74318]: pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:39:39
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.meta', 'images', '.rgw.root']
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:39:39 compute-0 python3.9[196454]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:39:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:39:39 compute-0 systemd[1]: Reloading.
Jan 21 23:39:39 compute-0 systemd-rc-local-generator[196486]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:39:39 compute-0 systemd-sysv-generator[196490]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:39:39 compute-0 sudo[196452]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:39.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:39 compute-0 sudo[196518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:39:39 compute-0 sudo[196518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:39 compute-0 sudo[196518]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:40 compute-0 sudo[196566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:39:40 compute-0 sudo[196566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:39:40 compute-0 sudo[196566]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:40 compute-0 sudo[196693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdaqlrkvfgznlyqrdbxhrusfutoawkff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038779.988692-1055-150807349399689/AnsiballZ_systemd.py'
Jan 21 23:39:40 compute-0 sudo[196693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:40 compute-0 python3.9[196695]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:39:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:40.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:40 compute-0 systemd[1]: Reloading.
Jan 21 23:39:40 compute-0 systemd-rc-local-generator[196727]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:39:40 compute-0 systemd-sysv-generator[196730]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:39:41 compute-0 sudo[196693]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:41 compute-0 ceph-mon[74318]: pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:41 compute-0 sudo[196885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nshpdmywpwvevtoetrcpmruwmblfhldu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038781.36735-1055-245731070918950/AnsiballZ_systemd.py'
Jan 21 23:39:41 compute-0 sudo[196885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:41.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:42 compute-0 python3.9[196887]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:39:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:39:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:42.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:43 compute-0 systemd[1]: Reloading.
Jan 21 23:39:43 compute-0 ceph-mon[74318]: pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:43 compute-0 systemd-rc-local-generator[196916]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:39:43 compute-0 systemd-sysv-generator[196921]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:39:43 compute-0 sudo[196885]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:43.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:43 compute-0 sudo[197076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyedsvdgfavyflafzklpkbkqfjauomih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038783.635193-1055-259657085429355/AnsiballZ_systemd.py'
Jan 21 23:39:43 compute-0 sudo[197076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:44 compute-0 python3.9[197078]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:39:44 compute-0 systemd[1]: Reloading.
Jan 21 23:39:44 compute-0 systemd-rc-local-generator[197109]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:39:44 compute-0 systemd-sysv-generator[197113]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:39:44 compute-0 sudo[197076]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:44.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:45 compute-0 ceph-mon[74318]: pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:45 compute-0 sudo[197265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiewzgquzlnfjitsmeximamocmnydzsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038784.9234326-1055-92837399799557/AnsiballZ_systemd.py'
Jan 21 23:39:45 compute-0 sudo[197265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:45 compute-0 python3.9[197267]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:39:45 compute-0 sudo[197265]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:39:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:45.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:39:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:46 compute-0 sudo[197421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfzfxmdhvxtkjzpyqyekxnljslfumjpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038785.7903955-1055-13625195165364/AnsiballZ_systemd.py'
Jan 21 23:39:46 compute-0 sudo[197421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:46 compute-0 python3.9[197423]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:39:46 compute-0 systemd[1]: Reloading.
Jan 21 23:39:46 compute-0 systemd-rc-local-generator[197449]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:39:46 compute-0 systemd-sysv-generator[197455]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:39:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:46.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:46 compute-0 sudo[197421]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:47 compute-0 ceph-mon[74318]: pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:47 compute-0 sudo[197612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-islzfatyjndhpczgvfbitbnbjukfvgfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038787.0258036-1163-68106885697122/AnsiballZ_systemd.py'
Jan 21 23:39:47 compute-0 sudo[197612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:39:47 compute-0 python3.9[197614]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 23:39:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:47.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:47 compute-0 systemd[1]: Reloading.
Jan 21 23:39:47 compute-0 systemd-rc-local-generator[197643]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:39:47 compute-0 systemd-sysv-generator[197648]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:39:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:48 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 21 23:39:48 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 21 23:39:48 compute-0 sudo[197612]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:39:48.732 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:39:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:39:48.734 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:39:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:39:48.734 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:39:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:48.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:48 compute-0 sudo[197805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfpiwokrjfhgojxnmaqzkbjmvaxemxkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038788.4524496-1187-119240345944024/AnsiballZ_systemd.py'
Jan 21 23:39:48 compute-0 sudo[197805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:49 compute-0 python3.9[197807]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:39:49 compute-0 ceph-mon[74318]: pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:49 compute-0 sudo[197805]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:49 compute-0 sudo[197961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaxixmvvmeexptgmuvvttszqmubcvtlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038789.504097-1187-78093930138760/AnsiballZ_systemd.py'
Jan 21 23:39:49 compute-0 sudo[197961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:49.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:50 compute-0 python3.9[197963]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:39:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:39:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:50.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:39:50 compute-0 podman[197966]: 2026-01-21 23:39:50.985761584 +0000 UTC m=+0.087161258 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 21 23:39:51 compute-0 sudo[197961]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:51 compute-0 ceph-mon[74318]: pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:51 compute-0 sudo[198136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dadzflkyputzsohaovsewzmeunhphmql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038791.414413-1187-226927694811379/AnsiballZ_systemd.py'
Jan 21 23:39:51 compute-0 sudo[198136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:51.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:52 compute-0 python3.9[198138]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:39:52 compute-0 sudo[198136]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:39:52 compute-0 sudo[198291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psompenbbwfzghvxdcupzsameqrcliri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038792.3323405-1187-219303135545911/AnsiballZ_systemd.py'
Jan 21 23:39:52 compute-0 sudo[198291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:52.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:53 compute-0 python3.9[198293]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:39:53 compute-0 sudo[198291]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:53 compute-0 ceph-mon[74318]: pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:53 compute-0 sudo[198447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jagjnhfeoysbwzhnbidpjmztzllshrwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038793.2733514-1187-221890078053631/AnsiballZ_systemd.py'
Jan 21 23:39:53 compute-0 sudo[198447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:39:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:53.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:39:53 compute-0 python3.9[198449]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:39:54 compute-0 sudo[198447]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:39:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:39:54 compute-0 ceph-mon[74318]: pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:54 compute-0 sudo[198602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhpivelqpbvalnueerqqsvefkexthmyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038794.1930096-1187-110689659529320/AnsiballZ_systemd.py'
Jan 21 23:39:54 compute-0 sudo[198602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:39:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:54.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:39:54 compute-0 python3.9[198604]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:39:55 compute-0 sudo[198602]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:55 compute-0 sudo[198758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyhcxpkfbokorpzodiakmrskklzpkocl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038795.184677-1187-110514627291810/AnsiballZ_systemd.py'
Jan 21 23:39:55 compute-0 sudo[198758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:55 compute-0 python3.9[198760]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:39:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:55.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:55 compute-0 sudo[198758]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:56 compute-0 sudo[198913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zecbaawtlzrtvbpkuesmhnpibygmiifg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038796.1753476-1187-244531640058637/AnsiballZ_systemd.py'
Jan 21 23:39:56 compute-0 sudo[198913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:56 compute-0 python3.9[198915]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:39:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:56.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:56 compute-0 sudo[198913]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:57 compute-0 ceph-mon[74318]: pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:57 compute-0 sudo[199068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaqiiiqfaykltcrmxefqqutuggyvqfqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038797.042515-1187-56642411707263/AnsiballZ_systemd.py'
Jan 21 23:39:57 compute-0 sudo[199068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:39:57 compute-0 python3.9[199071]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:39:57 compute-0 sudo[199068]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:39:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:57.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:39:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:58 compute-0 sudo[199224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwcbcdwyloxvsxyvifsjridmxnqaqqtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038797.864522-1187-48440526835643/AnsiballZ_systemd.py'
Jan 21 23:39:58 compute-0 sudo[199224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:58 compute-0 python3.9[199226]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:39:58 compute-0 sudo[199224]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:39:58.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:39:59 compute-0 sudo[199379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vacogvibwqgsomdbofugudlurcqvhens ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038798.7561004-1187-14082540853529/AnsiballZ_systemd.py'
Jan 21 23:39:59 compute-0 sudo[199379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:39:59 compute-0 ceph-mon[74318]: pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:39:59 compute-0 python3.9[199381]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:39:59 compute-0 sudo[199379]: pam_unix(sudo:session): session closed for user root
Jan 21 23:39:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:39:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:39:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:39:59.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:00 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 21 23:40:00 compute-0 sudo[199535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gynowveipulkpemeqvxxyygxxodgxmeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038799.6595745-1187-254588173844318/AnsiballZ_systemd.py'
Jan 21 23:40:00 compute-0 sudo[199535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:00 compute-0 sudo[199538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:40:00 compute-0 sudo[199538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:00 compute-0 sudo[199538]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:00 compute-0 ceph-mon[74318]: overall HEALTH_OK
Jan 21 23:40:00 compute-0 sudo[199563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:40:00 compute-0 sudo[199563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:00 compute-0 sudo[199563]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:00 compute-0 python3.9[199537]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:40:00 compute-0 sudo[199535]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:40:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:00.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:40:00 compute-0 sudo[199740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzkevelmrxpbyvndzmdtluiauepunevk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038800.6642127-1187-48538137381934/AnsiballZ_systemd.py'
Jan 21 23:40:00 compute-0 sudo[199740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:01 compute-0 ceph-mon[74318]: pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:01 compute-0 python3.9[199742]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:40:01 compute-0 sudo[199740]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:01.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:01 compute-0 sudo[199896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afpiyapsflzdqfsdqrtavreulifuqwwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038801.548695-1187-101485549304863/AnsiballZ_systemd.py'
Jan 21 23:40:01 compute-0 sudo[199896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:02 compute-0 python3.9[199898]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 23:40:02 compute-0 sudo[199896]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:40:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:02.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:03 compute-0 ceph-mon[74318]: pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:03 compute-0 sudo[200052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hffgkkdkfcuduvkedzjmwsseknjfuulw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038803.3735247-1493-246494555740952/AnsiballZ_file.py'
Jan 21 23:40:03 compute-0 sudo[200052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:03.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:03 compute-0 python3.9[200054]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:40:03 compute-0 sudo[200052]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:04 compute-0 sudo[200219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjhcbuitcyvcbxtfdhmcegaezthnpnxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038804.0926838-1493-280526453511991/AnsiballZ_file.py'
Jan 21 23:40:04 compute-0 sudo[200219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:04 compute-0 podman[200178]: 2026-01-21 23:40:04.499504898 +0000 UTC m=+0.118499483 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 21 23:40:04 compute-0 python3.9[200224]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:40:04 compute-0 sudo[200219]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:04.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:05 compute-0 sudo[200380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsigsyrgbvalvlfhitdskmhixnyaacfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038804.846303-1493-157261510207940/AnsiballZ_file.py'
Jan 21 23:40:05 compute-0 sudo[200380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:05 compute-0 ceph-mon[74318]: pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:05 compute-0 python3.9[200382]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:40:05 compute-0 sudo[200380]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:05.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:05 compute-0 sudo[200533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfqdiiluxczbvgxyconpzhmpzjpgacbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038805.630864-1493-69930838427922/AnsiballZ_file.py'
Jan 21 23:40:05 compute-0 sudo[200533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:06 compute-0 python3.9[200535]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:40:06 compute-0 sudo[200533]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:06 compute-0 sudo[200685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drjraykqxochzepslsnublrnijhzhgjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038806.360284-1493-206911600691857/AnsiballZ_file.py'
Jan 21 23:40:06 compute-0 sudo[200685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:06.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:06 compute-0 python3.9[200687]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:40:06 compute-0 sudo[200685]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:07 compute-0 ceph-mon[74318]: pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:07 compute-0 sudo[200838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdjagogoxggugmbmnfbajzhkuxhamwhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038807.0709026-1493-123264965034769/AnsiballZ_file.py'
Jan 21 23:40:07 compute-0 sudo[200838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:40:07 compute-0 python3.9[200840]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:40:07 compute-0 sudo[200838]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:07.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:08 compute-0 python3.9[200990]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:40:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:40:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:08.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:40:09 compute-0 sudo[201140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvfxdborglqtdcmhbdwsheffsotkqipx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038808.760499-1646-169149851409800/AnsiballZ_stat.py'
Jan 21 23:40:09 compute-0 sudo[201140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:40:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:40:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:40:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:40:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:40:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:40:09 compute-0 python3.9[201142]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:09 compute-0 sudo[201140]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:09 compute-0 ceph-mon[74318]: pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:09.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:10 compute-0 sudo[201266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciabcixpotpihbyoiqgqhahkaaxochmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038808.760499-1646-169149851409800/AnsiballZ_copy.py'
Jan 21 23:40:10 compute-0 sudo[201266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:10 compute-0 python3.9[201268]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769038808.760499-1646-169149851409800/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:10 compute-0 sudo[201266]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:10 compute-0 ceph-mon[74318]: pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:10.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:10 compute-0 sudo[201418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkngdxmcbxkpuqsmjmahutrepnrjeajy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038810.6041877-1646-83573050084724/AnsiballZ_stat.py'
Jan 21 23:40:10 compute-0 sudo[201418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:11 compute-0 python3.9[201420]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:11 compute-0 sudo[201418]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:11 compute-0 sudo[201544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnuprzuctypydxfzpqcnvhbrhckhmvsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038810.6041877-1646-83573050084724/AnsiballZ_copy.py'
Jan 21 23:40:11 compute-0 sudo[201544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:11 compute-0 python3.9[201546]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769038810.6041877-1646-83573050084724/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:11.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:11 compute-0 sudo[201544]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:12 compute-0 sudo[201696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhvdginjjkfmwgyqebdndxronuukihsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038812.0419297-1646-12318985970981/AnsiballZ_stat.py'
Jan 21 23:40:12 compute-0 sudo[201696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:40:12 compute-0 python3.9[201698]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:12 compute-0 sudo[201696]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:40:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:12.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:40:13 compute-0 sudo[201821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqrgeloqppnrooiyuectbzocqrtscqyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038812.0419297-1646-12318985970981/AnsiballZ_copy.py'
Jan 21 23:40:13 compute-0 sudo[201821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:13 compute-0 ceph-mon[74318]: pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:13 compute-0 python3.9[201823]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769038812.0419297-1646-12318985970981/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:13 compute-0 sudo[201821]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:13.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:14 compute-0 sudo[201974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quysnvzyadbpbyckhyouvinvfptbmlcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038813.533255-1646-229320240929266/AnsiballZ_stat.py'
Jan 21 23:40:14 compute-0 sudo[201974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:14 compute-0 python3.9[201976]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:14 compute-0 sudo[201974]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:14 compute-0 sudo[202099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nidbfelcscpmnjpzbiexkpbfoatdjswt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038813.533255-1646-229320240929266/AnsiballZ_copy.py'
Jan 21 23:40:14 compute-0 sudo[202099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:14.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:14 compute-0 python3.9[202101]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769038813.533255-1646-229320240929266/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:14 compute-0 sudo[202099]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:15 compute-0 ceph-mon[74318]: pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:15 compute-0 sudo[202252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goimvhifuxzvzcmadmlkydxltbdlxzil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038815.1625211-1646-127361272229857/AnsiballZ_stat.py'
Jan 21 23:40:15 compute-0 sudo[202252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:15 compute-0 python3.9[202254]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:15 compute-0 sudo[202252]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:40:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:15.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:40:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:16 compute-0 sudo[202377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogzeidqtfpnkxdffypclcxyonlziupiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038815.1625211-1646-127361272229857/AnsiballZ_copy.py'
Jan 21 23:40:16 compute-0 sudo[202377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:16 compute-0 python3.9[202379]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769038815.1625211-1646-127361272229857/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:16 compute-0 sudo[202377]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:40:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:16.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:40:17 compute-0 sudo[202529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbvknawjdvxecubdadrnaysoobhannyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038816.6910217-1646-95396137206641/AnsiballZ_stat.py'
Jan 21 23:40:17 compute-0 sudo[202529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:17 compute-0 ceph-mon[74318]: pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:17 compute-0 python3.9[202531]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:17 compute-0 sudo[202529]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:40:17 compute-0 sudo[202655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlhscbpaddlipfxllhlvqsmqrtqqvaav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038816.6910217-1646-95396137206641/AnsiballZ_copy.py'
Jan 21 23:40:17 compute-0 sudo[202655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:17.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:18 compute-0 python3.9[202657]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769038816.6910217-1646-95396137206641/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:18 compute-0 sudo[202655]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:18 compute-0 sudo[202807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olqcqordgyeriiqlqkrfdzhrpujzrxyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038818.2266142-1646-62103229249930/AnsiballZ_stat.py'
Jan 21 23:40:18 compute-0 sudo[202807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:18 compute-0 python3.9[202809]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:18 compute-0 sudo[202807]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:18.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:19 compute-0 ceph-mon[74318]: pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:19 compute-0 sudo[202931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaghajvqyazwukpcanbmlbvnkfruwmrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038818.2266142-1646-62103229249930/AnsiballZ_copy.py'
Jan 21 23:40:19 compute-0 sudo[202931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:19 compute-0 python3.9[202933]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769038818.2266142-1646-62103229249930/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:19 compute-0 sudo[202931]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:19.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:19 compute-0 sudo[202999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:40:19 compute-0 sudo[202999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:19 compute-0 sudo[202999]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:20 compute-0 sudo[203043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:40:20 compute-0 sudo[203043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:20 compute-0 sudo[203043]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:20 compute-0 sudo[203084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:40:20 compute-0 sudo[203084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:20 compute-0 sudo[203084]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:20 compute-0 sudo[203132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 21 23:40:20 compute-0 sudo[203132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:20 compute-0 sudo[203182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsyraeilvjfqjswndcpryncfocyferfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038819.8090546-1646-184146580139792/AnsiballZ_stat.py'
Jan 21 23:40:20 compute-0 sudo[203182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:20 compute-0 sudo[203186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:40:20 compute-0 sudo[203186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:20 compute-0 sudo[203186]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:20 compute-0 sudo[203218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:40:20 compute-0 sudo[203218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:20 compute-0 sudo[203218]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:20 compute-0 python3.9[203185]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:20 compute-0 sudo[203182]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:20 compute-0 podman[203378]: 2026-01-21 23:40:20.799287993 +0000 UTC m=+0.083052185 container exec 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:40:20 compute-0 sudo[203449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyxhoxlupermlnnbjpkypxxrtfirctwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038819.8090546-1646-184146580139792/AnsiballZ_copy.py'
Jan 21 23:40:20 compute-0 sudo[203449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:40:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:20.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:40:20 compute-0 podman[203378]: 2026-01-21 23:40:20.926854326 +0000 UTC m=+0.210618488 container exec_died 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:40:21 compute-0 python3.9[203451]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769038819.8090546-1646-184146580139792/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:21 compute-0 sudo[203449]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:21 compute-0 podman[203471]: 2026-01-21 23:40:21.132425511 +0000 UTC m=+0.096972812 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 23:40:21 compute-0 ceph-mon[74318]: pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:21 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:40:21 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:40:21 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:40:21 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:40:21 compute-0 sudo[203739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdnlusezazofpaggkrwdtlbtxywornnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038821.3288674-1985-235937654198603/AnsiballZ_command.py'
Jan 21 23:40:21 compute-0 sudo[203739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:21 compute-0 podman[203751]: 2026-01-21 23:40:21.876867565 +0000 UTC m=+0.091721054 container exec fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 21 23:40:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:21.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:21 compute-0 podman[203751]: 2026-01-21 23:40:21.895969608 +0000 UTC m=+0.110823097 container exec_died fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 21 23:40:21 compute-0 python3.9[203750]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 21 23:40:21 compute-0 sudo[203739]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:22 compute-0 podman[203840]: 2026-01-21 23:40:22.138738726 +0000 UTC m=+0.066364400 container exec 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, io.openshift.tags=Ceph keepalived, release=1793, com.redhat.component=keepalived-container, name=keepalived, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9)
Jan 21 23:40:22 compute-0 podman[203840]: 2026-01-21 23:40:22.16348642 +0000 UTC m=+0.091112044 container exec_died 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, com.redhat.component=keepalived-container, distribution-scope=public, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, version=2.2.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 21 23:40:22 compute-0 sudo[203132]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:40:22 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:40:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:40:22 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:40:22 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:40:22 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:40:22 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:40:22 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:40:22 compute-0 sudo[203887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:40:22 compute-0 sudo[203887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:22 compute-0 sudo[203887]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:22 compute-0 sudo[203937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:40:22 compute-0 sudo[203937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:22 compute-0 sudo[203937]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:40:22 compute-0 sudo[203994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:40:22 compute-0 sudo[203994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:22 compute-0 sudo[203994]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:22 compute-0 sudo[204041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:40:22 compute-0 sudo[204041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:22 compute-0 sudo[204099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awquxdemxyueqcqgdjekfzywnnunwfjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038822.2774036-2012-128639468908347/AnsiballZ_file.py'
Jan 21 23:40:22 compute-0 sudo[204099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:22 compute-0 python3.9[204101]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:22 compute-0 sudo[204099]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:22.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:23 compute-0 sudo[204041]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:40:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:40:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:40:23 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:40:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:40:23 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:40:23 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 02f63898-a6a9-43a2-8b1f-be2c2eb6304d does not exist
Jan 21 23:40:23 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 67355271-d3f6-4617-9aef-47c4327f5b06 does not exist
Jan 21 23:40:23 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 9d327a5c-0e30-41cd-aca9-be289d47c773 does not exist
Jan 21 23:40:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:40:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:40:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:40:23 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:40:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:40:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:40:23 compute-0 sudo[204246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:40:23 compute-0 sudo[204246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:23 compute-0 sudo[204246]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:23 compute-0 sudo[204311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsidhpxbqfbbhtqhjvkhntxzxgudozwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038822.9564292-2012-230741033206072/AnsiballZ_file.py'
Jan 21 23:40:23 compute-0 sudo[204311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:23 compute-0 ceph-mon[74318]: pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:40:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:40:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:40:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:40:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:40:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:40:23 compute-0 sudo[204305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:40:23 compute-0 sudo[204305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:23 compute-0 sudo[204305]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:23 compute-0 sudo[204336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:40:23 compute-0 sudo[204336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:23 compute-0 sudo[204336]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:23 compute-0 sudo[204361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:40:23 compute-0 sudo[204361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
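[editor's note] This sudo line is cephadm's host-side wrapper at work: the orchestrator keeps a copy of itself at /var/lib/ceph/<fsid>/cephadm.<sha256> and uses it to run ceph-volume inside the ceph container, here `lvm batch --no-auto` against the pre-created LV /dev/ceph_vg0/ceph_lv0, with `--no-systemd` because cephadm manages the unit files itself. A minimal sketch of replaying the same step by hand, assuming the cephadm CLI is installed on the host; `--report` turns it into a dry run, and the --timeout/--config-json/--env plumbing from the logged command is omitted:

    import subprocess

    # Re-run the logged ceph-volume step as a dry run (illustrative only;
    # FSID and image digest are the ones from this log).
    FSID = "3759241a-7f1c-520d-ba17-879943ee2f00"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    subprocess.run(
        ["sudo", "cephadm", "--image", IMAGE,
         "ceph-volume", "--fsid", FSID, "--",
         "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0",
         "--yes", "--no-systemd", "--report"],
        check=True)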
Jan 21 23:40:23 compute-0 python3.9[204329]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:23 compute-0 sudo[204311]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:23 compute-0 podman[204502]: 2026-01-21 23:40:23.826442672 +0000 UTC m=+0.062776925 container create 7e42348e79af0de34e0adcea918c0f9beb305d9aef1933da6533c4c200e43644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_beaver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 21 23:40:23 compute-0 systemd[1]: Started libpod-conmon-7e42348e79af0de34e0adcea918c0f9beb305d9aef1933da6533c4c200e43644.scope.
Jan 21 23:40:23 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:40:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:40:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:23.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:40:23 compute-0 podman[204502]: 2026-01-21 23:40:23.807817225 +0000 UTC m=+0.044151478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:40:23 compute-0 podman[204502]: 2026-01-21 23:40:23.907232194 +0000 UTC m=+0.143566487 container init 7e42348e79af0de34e0adcea918c0f9beb305d9aef1933da6533c4c200e43644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_beaver, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:40:23 compute-0 podman[204502]: 2026-01-21 23:40:23.91459569 +0000 UTC m=+0.150929933 container start 7e42348e79af0de34e0adcea918c0f9beb305d9aef1933da6533c4c200e43644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Jan 21 23:40:23 compute-0 podman[204502]: 2026-01-21 23:40:23.917719831 +0000 UTC m=+0.154054084 container attach 7e42348e79af0de34e0adcea918c0f9beb305d9aef1933da6533c4c200e43644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 23:40:23 compute-0 interesting_beaver[204541]: 167 167
Jan 21 23:40:23 compute-0 systemd[1]: libpod-7e42348e79af0de34e0adcea918c0f9beb305d9aef1933da6533c4c200e43644.scope: Deactivated successfully.
Jan 21 23:40:23 compute-0 conmon[204541]: conmon 7e42348e79af0de34e0a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e42348e79af0de34e0adcea918c0f9beb305d9aef1933da6533c4c200e43644.scope/container/memory.events
Jan 21 23:40:23 compute-0 podman[204502]: 2026-01-21 23:40:23.92269442 +0000 UTC m=+0.159028713 container died 7e42348e79af0de34e0adcea918c0f9beb305d9aef1933da6533c4c200e43644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_beaver, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 23:40:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef7d8b8bf7fbebf6e19540b3de3993fa3decd9714f7b786f54c65ba5a687ffc9-merged.mount: Deactivated successfully.
Jan 21 23:40:23 compute-0 podman[204502]: 2026-01-21 23:40:23.975092791 +0000 UTC m=+0.211427044 container remove 7e42348e79af0de34e0adcea918c0f9beb305d9aef1933da6533c4c200e43644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_beaver, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Jan 21 23:40:23 compute-0 systemd[1]: libpod-conmon-7e42348e79af0de34e0adcea918c0f9beb305d9aef1933da6533c4c200e43644.scope: Deactivated successfully.
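[editor's note] The burst above is one complete libpod lifecycle (create, init, start, attach, died, remove) for a throwaway cephadm helper container; the `167 167` it printed looks like cephadm's uid/gid probe of /var/lib/ceph (167 is the ceph user and group inside the image), and the conmon memory.events warning is expected for a container that exits within milliseconds of starting. A hedged one-shot equivalent, with the exact probe command an assumption:

    import subprocess

    # One-shot container equivalent to the create/start/attach/remove run
    # above; --rm removes it on exit.  The stat probe is an assumption
    # about what cephadm runs to learn the ceph UID/GID.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    print(out.strip())   # expected: "167 167"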
Jan 21 23:40:24 compute-0 sudo[204610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpzvmzxwawxxsithguxgwyjjatyibnis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038823.6795719-2012-212597439357631/AnsiballZ_file.py'
Jan 21 23:40:24 compute-0 sudo[204610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:24 compute-0 podman[204618]: 2026-01-21 23:40:24.135407684 +0000 UTC m=+0.047416822 container create cc45968edd40230c300fef0fdb7806b39157db021aa5686d8933480a4c2af6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:40:24 compute-0 systemd[1]: Started libpod-conmon-cc45968edd40230c300fef0fdb7806b39157db021aa5686d8933480a4c2af6c4.scope.
Jan 21 23:40:24 compute-0 python3.9[204612]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:24 compute-0 podman[204618]: 2026-01-21 23:40:24.116374564 +0000 UTC m=+0.028383712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:40:24 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:40:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7d60e2e0db39f6ec51b09e0d65c20df2f1a9906f7035c5e3d6a4e66c2bc93e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:40:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7d60e2e0db39f6ec51b09e0d65c20df2f1a9906f7035c5e3d6a4e66c2bc93e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:40:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7d60e2e0db39f6ec51b09e0d65c20df2f1a9906f7035c5e3d6a4e66c2bc93e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:40:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7d60e2e0db39f6ec51b09e0d65c20df2f1a9906f7035c5e3d6a4e66c2bc93e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:40:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7d60e2e0db39f6ec51b09e0d65c20df2f1a9906f7035c5e3d6a4e66c2bc93e6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
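[editor's note] These xfs notices mean the overlay (and the files bind-mounted through it) sits on a filesystem with 32-bit inode timestamps, which run out at 0x7fffffff seconds past the epoch. Converting that limit, just to make the kernel's warning concrete:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t, the point the
    # kernel messages above are warning about.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())   # 2038-01-19T03:14:07+00:00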
Jan 21 23:40:24 compute-0 sudo[204610]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:24 compute-0 podman[204618]: 2026-01-21 23:40:24.236545839 +0000 UTC m=+0.148555057 container init cc45968edd40230c300fef0fdb7806b39157db021aa5686d8933480a4c2af6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goodall, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 23:40:24 compute-0 podman[204618]: 2026-01-21 23:40:24.243775401 +0000 UTC m=+0.155784569 container start cc45968edd40230c300fef0fdb7806b39157db021aa5686d8933480a4c2af6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 23:40:24 compute-0 podman[204618]: 2026-01-21 23:40:24.248158052 +0000 UTC m=+0.160167240 container attach cc45968edd40230c300fef0fdb7806b39157db021aa5686d8933480a4c2af6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goodall, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 21 23:40:24 compute-0 sudo[204789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slttapotkfkeayeounxgmouxrrqdnrxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038824.4098308-2012-256334891981935/AnsiballZ_file.py'
Jan 21 23:40:24 compute-0 sudo[204789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:24.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
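[editor's note] The anonymous `HEAD / HTTP/1.0` hitting radosgw roughly once a second from 192.168.122.100 and .102 has the shape of a load-balancer health probe; each request produces the starting/done pair plus one beast access line. A sketch of parsing that access-line layout, with the field order inferred from this log rather than from a formal radosgw specification:

    import re

    # Parse the beast access-line format seen above (layout inferred from
    # this log; the three "-" fields after the byte count are left unnamed).
    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
        r'.* latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous '
            '[21/Jan/2026:23:40:23.892 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000033s')
    m = BEAST.search(line)
    print(m.group("ip"), m.group("req"), m.group("status"),
          m.group("latency"))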
Jan 21 23:40:25 compute-0 tender_goodall[204635]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:40:25 compute-0 tender_goodall[204635]: --> relative data size: 1.0
Jan 21 23:40:25 compute-0 tender_goodall[204635]: --> All data devices are unavailable
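[editor's note] The tender_goodall output is ceph-volume's batch pre-flight: one LVM data device was passed, and it is reported unavailable because the LV already carries ceph.* tags from a previous prepare (the `lvm list` output further down shows osd_id 1 on this same LV), so the batch run is an idempotent no-op rather than a failure. A hedged way to confirm that from the host, using standard LVM2 tooling:

    import subprocess

    # An LV that already carries ceph.* lv_tags is exactly what makes
    # "lvm batch" report it as unavailable.
    tags = subprocess.run(
        ["sudo", "lvs", "--noheadings", "-o", "lv_tags",
         "/dev/ceph_vg0/ceph_lv0"],
        capture_output=True, text=True, check=True).stdout.strip()
    print("already an OSD" if "ceph.osd_id" in tags else "free for batch")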
Jan 21 23:40:25 compute-0 python3.9[204791]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:25 compute-0 systemd[1]: libpod-cc45968edd40230c300fef0fdb7806b39157db021aa5686d8933480a4c2af6c4.scope: Deactivated successfully.
Jan 21 23:40:25 compute-0 sudo[204789]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:25 compute-0 podman[204802]: 2026-01-21 23:40:25.131312705 +0000 UTC m=+0.030376935 container died cc45968edd40230c300fef0fdb7806b39157db021aa5686d8933480a4c2af6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 23:40:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7d60e2e0db39f6ec51b09e0d65c20df2f1a9906f7035c5e3d6a4e66c2bc93e6-merged.mount: Deactivated successfully.
Jan 21 23:40:25 compute-0 podman[204802]: 2026-01-21 23:40:25.205446084 +0000 UTC m=+0.104510324 container remove cc45968edd40230c300fef0fdb7806b39157db021aa5686d8933480a4c2af6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goodall, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 23:40:25 compute-0 systemd[1]: libpod-conmon-cc45968edd40230c300fef0fdb7806b39157db021aa5686d8933480a4c2af6c4.scope: Deactivated successfully.
Jan 21 23:40:25 compute-0 sudo[204361]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:25 compute-0 sudo[204853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:40:25 compute-0 sudo[204853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:25 compute-0 sudo[204853]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:25 compute-0 ceph-mon[74318]: pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:25 compute-0 sudo[204911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:40:25 compute-0 sudo[204911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:25 compute-0 sudo[204911]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:25 compute-0 sudo[204952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:40:25 compute-0 sudo[204952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:25 compute-0 sudo[204952]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:25 compute-0 sudo[204993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:40:25 compute-0 sudo[204993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:25 compute-0 sudo[205067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldjvwzqxeiqwmgwujojhspoguyzhbkaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038825.2895286-2012-276411236213453/AnsiballZ_file.py'
Jan 21 23:40:25 compute-0 sudo[205067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:25 compute-0 python3.9[205071]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
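[editor's note] Each `ansible-ansible.builtin.file Invoked with ...` line is the target-side module logging its fully resolved parameters to syslog; this run of tasks is simply creating systemd socket drop-in directories as root:root 0755. A rough Python equivalent of this particular invocation, illustrative only (the real module also handles idempotent diffs, SELinux contexts, and the other parameters echoed above):

    import os
    import shutil

    # state=directory, mode=0755, owner=root, group=root, as logged above.
    path = "/etc/systemd/system/virtnodedevd-admin.socket.d"
    os.makedirs(path, exist_ok=True)
    os.chmod(path, 0o755)
    shutil.chown(path, user="root", group="root")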
Jan 21 23:40:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:40:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:25.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:40:25 compute-0 sudo[205067]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:25 compute-0 podman[205111]: 2026-01-21 23:40:25.923607364 +0000 UTC m=+0.052057338 container create 15c2b66a84003e00bfb0dc39d1115ee820b6c0d58ac0bd3621f4e1df5540f907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_boyd, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 23:40:25 compute-0 systemd[1]: Started libpod-conmon-15c2b66a84003e00bfb0dc39d1115ee820b6c0d58ac0bd3621f4e1df5540f907.scope.
Jan 21 23:40:25 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:40:25 compute-0 podman[205111]: 2026-01-21 23:40:25.896298672 +0000 UTC m=+0.024748636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:40:26 compute-0 podman[205111]: 2026-01-21 23:40:26.002501999 +0000 UTC m=+0.130951943 container init 15c2b66a84003e00bfb0dc39d1115ee820b6c0d58ac0bd3621f4e1df5540f907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 23:40:26 compute-0 podman[205111]: 2026-01-21 23:40:26.010427391 +0000 UTC m=+0.138877365 container start 15c2b66a84003e00bfb0dc39d1115ee820b6c0d58ac0bd3621f4e1df5540f907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_boyd, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:40:26 compute-0 podman[205111]: 2026-01-21 23:40:26.01464577 +0000 UTC m=+0.143095734 container attach 15c2b66a84003e00bfb0dc39d1115ee820b6c0d58ac0bd3621f4e1df5540f907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_boyd, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 23:40:26 compute-0 beautiful_boyd[205151]: 167 167
Jan 21 23:40:26 compute-0 systemd[1]: libpod-15c2b66a84003e00bfb0dc39d1115ee820b6c0d58ac0bd3621f4e1df5540f907.scope: Deactivated successfully.
Jan 21 23:40:26 compute-0 podman[205111]: 2026-01-21 23:40:26.016219927 +0000 UTC m=+0.144669931 container died 15c2b66a84003e00bfb0dc39d1115ee820b6c0d58ac0bd3621f4e1df5540f907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_boyd, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 23:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-879451a6b243f240b03ef11f77539dabd3a8b77ded6c0fd801179a064900179c-merged.mount: Deactivated successfully.
Jan 21 23:40:26 compute-0 podman[205111]: 2026-01-21 23:40:26.051959507 +0000 UTC m=+0.180409441 container remove 15c2b66a84003e00bfb0dc39d1115ee820b6c0d58ac0bd3621f4e1df5540f907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Jan 21 23:40:26 compute-0 systemd[1]: libpod-conmon-15c2b66a84003e00bfb0dc39d1115ee820b6c0d58ac0bd3621f4e1df5540f907.scope: Deactivated successfully.
Jan 21 23:40:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:26 compute-0 podman[205232]: 2026-01-21 23:40:26.244194608 +0000 UTC m=+0.071046647 container create efacfd989efc6bdba42cf2397f2148c4d21186bebe5b4606886a7d9c0e3ac970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_burnell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 21 23:40:26 compute-0 systemd[1]: Started libpod-conmon-efacfd989efc6bdba42cf2397f2148c4d21186bebe5b4606886a7d9c0e3ac970.scope.
Jan 21 23:40:26 compute-0 podman[205232]: 2026-01-21 23:40:26.213514233 +0000 UTC m=+0.040366322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:40:26 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfda1fafc99da5cc83204452751ff0f3f678b547d7410f0b87f77c77f3932c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfda1fafc99da5cc83204452751ff0f3f678b547d7410f0b87f77c77f3932c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfda1fafc99da5cc83204452751ff0f3f678b547d7410f0b87f77c77f3932c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfda1fafc99da5cc83204452751ff0f3f678b547d7410f0b87f77c77f3932c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:40:26 compute-0 podman[205232]: 2026-01-21 23:40:26.355018436 +0000 UTC m=+0.181870515 container init efacfd989efc6bdba42cf2397f2148c4d21186bebe5b4606886a7d9c0e3ac970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:40:26 compute-0 podman[205232]: 2026-01-21 23:40:26.36334017 +0000 UTC m=+0.190192199 container start efacfd989efc6bdba42cf2397f2148c4d21186bebe5b4606886a7d9c0e3ac970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:40:26 compute-0 podman[205232]: 2026-01-21 23:40:26.367314262 +0000 UTC m=+0.194166271 container attach efacfd989efc6bdba42cf2397f2148c4d21186bebe5b4606886a7d9c0e3ac970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_burnell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:40:26 compute-0 sudo[205320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deyfzgthoisottsslmqhsatmabasmsdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038826.0506065-2012-180686537360895/AnsiballZ_file.py'
Jan 21 23:40:26 compute-0 sudo[205320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:26 compute-0 python3.9[205322]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:26 compute-0 sudo[205320]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:40:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:26.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:40:27 compute-0 sudo[205474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rimtcctqxiopxutmuonruhvgdxfqnajb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038826.8016355-2012-20374840118566/AnsiballZ_file.py'
Jan 21 23:40:27 compute-0 sudo[205474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:27 compute-0 serene_burnell[205289]: {
Jan 21 23:40:27 compute-0 serene_burnell[205289]:     "1": [
Jan 21 23:40:27 compute-0 serene_burnell[205289]:         {
Jan 21 23:40:27 compute-0 serene_burnell[205289]:             "devices": [
Jan 21 23:40:27 compute-0 serene_burnell[205289]:                 "/dev/loop3"
Jan 21 23:40:27 compute-0 serene_burnell[205289]:             ],
Jan 21 23:40:27 compute-0 serene_burnell[205289]:             "lv_name": "ceph_lv0",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:             "lv_size": "7511998464",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:             "name": "ceph_lv0",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:             "tags": {
Jan 21 23:40:27 compute-0 serene_burnell[205289]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:                 "ceph.cluster_name": "ceph",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:                 "ceph.crush_device_class": "",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:                 "ceph.encrypted": "0",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:                 "ceph.osd_id": "1",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:                 "ceph.type": "block",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:                 "ceph.vdo": "0"
Jan 21 23:40:27 compute-0 serene_burnell[205289]:             },
Jan 21 23:40:27 compute-0 serene_burnell[205289]:             "type": "block",
Jan 21 23:40:27 compute-0 serene_burnell[205289]:             "vg_name": "ceph_vg0"
Jan 21 23:40:27 compute-0 serene_burnell[205289]:         }
Jan 21 23:40:27 compute-0 serene_burnell[205289]:     ]
Jan 21 23:40:27 compute-0 serene_burnell[205289]: }
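[editor's note] The serene_burnell container is `ceph-volume lvm list --format json`, whose payload is a map from OSD id to the logical volumes backing it. A sketch of pulling out the fields cephadm needs, run here against a trimmed copy of the output above; in practice raw_json would be the container's stdout:

    import json

    # Trimmed copy of the lvm list JSON printed above.
    raw_json = """
    {
      "1": [{
        "devices": ["/dev/loop3"],
        "lv_path": "/dev/ceph_vg0/ceph_lv0",
        "tags": {"ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8"}
      }]
    }
    """
    for osd_id, vols in json.loads(raw_json).items():
        for vol in vols:
            print(f"osd.{osd_id}: fsid={vol['tags']['ceph.osd_fsid']} "
                  f"path={vol['lv_path']} devices={','.join(vol['devices'])}")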
Jan 21 23:40:27 compute-0 systemd[1]: libpod-efacfd989efc6bdba42cf2397f2148c4d21186bebe5b4606886a7d9c0e3ac970.scope: Deactivated successfully.
Jan 21 23:40:27 compute-0 podman[205232]: 2026-01-21 23:40:27.135781119 +0000 UTC m=+0.962633128 container died efacfd989efc6bdba42cf2397f2148c4d21186bebe5b4606886a7d9c0e3ac970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_burnell, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 21 23:40:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cfda1fafc99da5cc83204452751ff0f3f678b547d7410f0b87f77c77f3932c4-merged.mount: Deactivated successfully.
Jan 21 23:40:27 compute-0 podman[205232]: 2026-01-21 23:40:27.190756665 +0000 UTC m=+1.017608664 container remove efacfd989efc6bdba42cf2397f2148c4d21186bebe5b4606886a7d9c0e3ac970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:40:27 compute-0 systemd[1]: libpod-conmon-efacfd989efc6bdba42cf2397f2148c4d21186bebe5b4606886a7d9c0e3ac970.scope: Deactivated successfully.
Jan 21 23:40:27 compute-0 sudo[204993]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:27 compute-0 python3.9[205476]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:27 compute-0 sudo[205474]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:27 compute-0 sudo[205490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:40:27 compute-0 sudo[205490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:27 compute-0 sudo[205490]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:27 compute-0 sudo[205515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:40:27 compute-0 sudo[205515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:27 compute-0 sudo[205515]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:27 compute-0 ceph-mon[74318]: pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:27 compute-0 sudo[205564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:40:27 compute-0 sudo[205564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:27 compute-0 sudo[205564]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:27 compute-0 sudo[205610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:40:27 compute-0 sudo[205610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
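[editor's note] The monitor's cache tuner splits a roughly 0.95 GiB target between the incremental and full osdmap caches and the rocksdb cache; the three allocations in the line above should sum to just under cache_size, which is the invariant worth checking when these numbers look off:

    # Sanity-check the _set_new_cache_sizes line above.
    cache_size = 1020054731
    inc_alloc  = 348127232
    full_alloc = 348127232
    kv_alloc   = 318767104
    total = inc_alloc + full_alloc + kv_alloc
    print(total, total <= cache_size)   # 1015021568 True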
Jan 21 23:40:27 compute-0 sudo[205770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iewszwottjarkbnzownnyrrhadukqoxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038827.3954096-2012-71358629757249/AnsiballZ_file.py'
Jan 21 23:40:27 compute-0 sudo[205770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:27 compute-0 podman[205785]: 2026-01-21 23:40:27.753172752 +0000 UTC m=+0.060965210 container create 5d74b9a1b39ecfd649ce12aafa164b89f3ac8b50574403291a11b10e661cfca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_torvalds, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 21 23:40:27 compute-0 systemd[1]: Started libpod-conmon-5d74b9a1b39ecfd649ce12aafa164b89f3ac8b50574403291a11b10e661cfca5.scope.
Jan 21 23:40:27 compute-0 podman[205785]: 2026-01-21 23:40:27.723800956 +0000 UTC m=+0.031593524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:40:27 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:40:27 compute-0 podman[205785]: 2026-01-21 23:40:27.840950357 +0000 UTC m=+0.148742845 container init 5d74b9a1b39ecfd649ce12aafa164b89f3ac8b50574403291a11b10e661cfca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Jan 21 23:40:27 compute-0 podman[205785]: 2026-01-21 23:40:27.847794446 +0000 UTC m=+0.155586914 container start 5d74b9a1b39ecfd649ce12aafa164b89f3ac8b50574403291a11b10e661cfca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_torvalds, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 21 23:40:27 compute-0 podman[205785]: 2026-01-21 23:40:27.851283423 +0000 UTC m=+0.159075901 container attach 5d74b9a1b39ecfd649ce12aafa164b89f3ac8b50574403291a11b10e661cfca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_torvalds, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:40:27 compute-0 tender_torvalds[205802]: 167 167
Jan 21 23:40:27 compute-0 systemd[1]: libpod-5d74b9a1b39ecfd649ce12aafa164b89f3ac8b50574403291a11b10e661cfca5.scope: Deactivated successfully.
Jan 21 23:40:27 compute-0 podman[205785]: 2026-01-21 23:40:27.857632756 +0000 UTC m=+0.165425254 container died 5d74b9a1b39ecfd649ce12aafa164b89f3ac8b50574403291a11b10e661cfca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_torvalds, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 23:40:27 compute-0 python3.9[205782]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:27 compute-0 sudo[205770]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e978945d362a47e94050122fc83bfd0783614a0cd887110825aaa25b429a2fe-merged.mount: Deactivated successfully.
Jan 21 23:40:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:27.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:27 compute-0 podman[205785]: 2026-01-21 23:40:27.913071586 +0000 UTC m=+0.220864044 container remove 5d74b9a1b39ecfd649ce12aafa164b89f3ac8b50574403291a11b10e661cfca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 23:40:27 compute-0 systemd[1]: libpod-conmon-5d74b9a1b39ecfd649ce12aafa164b89f3ac8b50574403291a11b10e661cfca5.scope: Deactivated successfully.
Jan 21 23:40:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:28 compute-0 podman[205871]: 2026-01-21 23:40:28.088534746 +0000 UTC m=+0.042469536 container create fb503a313c2b8f299b434f9a92a7381341fb809f69461b6c323c4c55a0656ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kalam, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 21 23:40:28 compute-0 systemd[1]: Started libpod-conmon-fb503a313c2b8f299b434f9a92a7381341fb809f69461b6c323c4c55a0656ce2.scope.
Jan 21 23:40:28 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:40:28 compute-0 podman[205871]: 2026-01-21 23:40:28.071228708 +0000 UTC m=+0.025163518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d7e0dfe6fcad00859608940454dd02a363dee3290a25d7923681329f240a61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d7e0dfe6fcad00859608940454dd02a363dee3290a25d7923681329f240a61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d7e0dfe6fcad00859608940454dd02a363dee3290a25d7923681329f240a61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d7e0dfe6fcad00859608940454dd02a363dee3290a25d7923681329f240a61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:40:28 compute-0 podman[205871]: 2026-01-21 23:40:28.194643661 +0000 UTC m=+0.148578471 container init fb503a313c2b8f299b434f9a92a7381341fb809f69461b6c323c4c55a0656ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:40:28 compute-0 podman[205871]: 2026-01-21 23:40:28.201661704 +0000 UTC m=+0.155596534 container start fb503a313c2b8f299b434f9a92a7381341fb809f69461b6c323c4c55a0656ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kalam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:40:28 compute-0 podman[205871]: 2026-01-21 23:40:28.205291215 +0000 UTC m=+0.159226025 container attach fb503a313c2b8f299b434f9a92a7381341fb809f69461b6c323c4c55a0656ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kalam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 23:40:28 compute-0 sudo[205994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uouhuchdjbwchekybmlvlwhavbyrsdxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038828.0287313-2012-108691929072496/AnsiballZ_file.py'
Jan 21 23:40:28 compute-0 sudo[205994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:28 compute-0 python3.9[205996]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:28 compute-0 sudo[205994]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:28.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:29 compute-0 distracted_kalam[205929]: {
Jan 21 23:40:29 compute-0 distracted_kalam[205929]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:40:29 compute-0 distracted_kalam[205929]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:40:29 compute-0 distracted_kalam[205929]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:40:29 compute-0 distracted_kalam[205929]:         "osd_id": 1,
Jan 21 23:40:29 compute-0 distracted_kalam[205929]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:40:29 compute-0 distracted_kalam[205929]:         "type": "bluestore"
Jan 21 23:40:29 compute-0 distracted_kalam[205929]:     }
Jan 21 23:40:29 compute-0 distracted_kalam[205929]: }
Jan 21 23:40:29 compute-0 sudo[206160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkjwyuhkymddmubyzscddxnttubupkmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038828.7777896-2012-80868047229437/AnsiballZ_file.py'
Jan 21 23:40:29 compute-0 sudo[206160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:29 compute-0 systemd[1]: libpod-fb503a313c2b8f299b434f9a92a7381341fb809f69461b6c323c4c55a0656ce2.scope: Deactivated successfully.
Jan 21 23:40:29 compute-0 podman[205871]: 2026-01-21 23:40:29.135230026 +0000 UTC m=+1.089164846 container died fb503a313c2b8f299b434f9a92a7381341fb809f69461b6c323c4c55a0656ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kalam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:40:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2d7e0dfe6fcad00859608940454dd02a363dee3290a25d7923681329f240a61-merged.mount: Deactivated successfully.
Jan 21 23:40:29 compute-0 podman[205871]: 2026-01-21 23:40:29.232267904 +0000 UTC m=+1.186202734 container remove fb503a313c2b8f299b434f9a92a7381341fb809f69461b6c323c4c55a0656ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kalam, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:40:29 compute-0 systemd[1]: libpod-conmon-fb503a313c2b8f299b434f9a92a7381341fb809f69461b6c323c4c55a0656ce2.scope: Deactivated successfully.
Jan 21 23:40:29 compute-0 sudo[205610]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:40:29 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:40:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:40:29 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:40:29 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 9204cc0d-2159-40c8-99c4-6f575eb8f2e7 does not exist
Jan 21 23:40:29 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 7dcee670-bdab-4361-b094-74a4e54c01a4 does not exist
Jan 21 23:40:29 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 670acae5-814d-48be-a12d-938144f0645e does not exist
Jan 21 23:40:29 compute-0 python3.9[206164]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:29 compute-0 sudo[206160]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:29 compute-0 ceph-mon[74318]: pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:29 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:40:29 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:40:29 compute-0 sudo[206179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:40:29 compute-0 sudo[206179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:29 compute-0 sudo[206179]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:29 compute-0 sudo[206215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:40:29 compute-0 sudo[206215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:29 compute-0 sudo[206215]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:29 compute-0 sudo[206379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxnrjuatstvlcbebokczyepdvhtjdzuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038829.5413985-2012-275021963347168/AnsiballZ_file.py'
Jan 21 23:40:29 compute-0 sudo[206379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:29.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:30 compute-0 python3.9[206381]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:30 compute-0 sudo[206379]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:30 compute-0 ceph-mon[74318]: pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:30 compute-0 sudo[206531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijrpyqemsvwhnmqukgttgnjmkolqfira ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038830.2330065-2012-272439245064459/AnsiballZ_file.py'
Jan 21 23:40:30 compute-0 sudo[206531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:30 compute-0 python3.9[206533]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:30 compute-0 sudo[206531]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:30.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:31 compute-0 sudo[206683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwucwtwbnjoswucsvlzwmzehynpwlwkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038830.952545-2012-165373389835009/AnsiballZ_file.py'
Jan 21 23:40:31 compute-0 sudo[206683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:31 compute-0 python3.9[206685]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:31 compute-0 sudo[206683]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:31.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:32 compute-0 sudo[206836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aekgcomkhiiktxzwjlzcwofsjwomzgfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038831.717039-2012-280948223780042/AnsiballZ_file.py'
Jan 21 23:40:32 compute-0 sudo[206836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:32 compute-0 python3.9[206838]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:32 compute-0 sudo[206836]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:32.479895) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038832479963, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 3002, "num_deletes": 503, "total_data_size": 5439294, "memory_usage": 5520720, "flush_reason": "Manual Compaction"}
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038832535874, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 5325262, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12335, "largest_seqno": 15336, "table_properties": {"data_size": 5312455, "index_size": 8149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3589, "raw_key_size": 26512, "raw_average_key_size": 18, "raw_value_size": 5285248, "raw_average_value_size": 3737, "num_data_blocks": 364, "num_entries": 1414, "num_filter_entries": 1414, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769038521, "oldest_key_time": 1769038521, "file_creation_time": 1769038832, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 56039 microseconds, and 20853 cpu microseconds.
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:32.535942) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 5325262 bytes OK
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:32.535965) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:32.537840) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:32.537871) EVENT_LOG_v1 {"time_micros": 1769038832537864, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:32.537890) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 5426705, prev total WAL file size 5426705, number of live WAL files 2.
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:32.540284) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(5200KB)], [29(8227KB)]
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038832540461, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 13750721, "oldest_snapshot_seqno": -1}
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4404 keys, 11410536 bytes, temperature: kUnknown
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038832658854, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 11410536, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11375371, "index_size": 23023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 107786, "raw_average_key_size": 24, "raw_value_size": 11290189, "raw_average_value_size": 2563, "num_data_blocks": 975, "num_entries": 4404, "num_filter_entries": 4404, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769038832, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:32.659133) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11410536 bytes
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:32.660777) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 116.1 rd, 96.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.1, 8.0 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(4.7) write-amplify(2.1) OK, records in: 5429, records dropped: 1025 output_compression: NoCompression
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:32.660807) EVENT_LOG_v1 {"time_micros": 1769038832660792, "job": 12, "event": "compaction_finished", "compaction_time_micros": 118467, "compaction_time_cpu_micros": 48390, "output_level": 6, "num_output_files": 1, "total_output_size": 11410536, "num_input_records": 5429, "num_output_records": 4404, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038832662656, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038832665304, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:32.539978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:32.665446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:32.665456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:32.665460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:32.665463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:40:32 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:32.665466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:40:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:40:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:32.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:40:33 compute-0 ceph-mon[74318]: pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:33 compute-0 sudo[206989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnfqijeeidlsueuxsmvekwwxumfgcctz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038833.2995222-2309-33398597019908/AnsiballZ_stat.py'
Jan 21 23:40:33 compute-0 sudo[206989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:33.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:33 compute-0 python3.9[206991]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:33 compute-0 sudo[206989]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:34 compute-0 sudo[207112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxokfrjxfbhpusgpjodnlqafnoszgxcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038833.2995222-2309-33398597019908/AnsiballZ_copy.py'
Jan 21 23:40:34 compute-0 sudo[207112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:34 compute-0 python3.9[207114]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038833.2995222-2309-33398597019908/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:34 compute-0 sudo[207112]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:40:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:34.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:40:35 compute-0 podman[207146]: 2026-01-21 23:40:35.095949768 +0000 UTC m=+0.196619135 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 21 23:40:35 compute-0 ceph-mon[74318]: pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:35 compute-0 sudo[207287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxgpkqdgfshoiwiqbfxxufawvepfmcwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038834.9083693-2309-145692672612998/AnsiballZ_stat.py'
Jan 21 23:40:35 compute-0 sudo[207287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:35 compute-0 python3.9[207289]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:35 compute-0 sudo[207287]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:35.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:36 compute-0 sudo[207411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwfznoyyaivbhwebkazxiziddoycldxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038834.9083693-2309-145692672612998/AnsiballZ_copy.py'
Jan 21 23:40:36 compute-0 sudo[207411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:36 compute-0 python3.9[207413]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038834.9083693-2309-145692672612998/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:36 compute-0 sudo[207411]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:40:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:36.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:40:37 compute-0 sudo[207563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zevrjgnioefcpbpxhqbxcsfglfgsqtgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038836.5728436-2309-260596370124217/AnsiballZ_stat.py'
Jan 21 23:40:37 compute-0 sudo[207563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:37 compute-0 python3.9[207565]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:37 compute-0 sudo[207563]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:37 compute-0 ceph-mon[74318]: pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:40:37 compute-0 sudo[207687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkngvhaytpcrdgbsudpnykqvyhsxpuda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038836.5728436-2309-260596370124217/AnsiballZ_copy.py'
Jan 21 23:40:37 compute-0 sudo[207687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:37 compute-0 python3.9[207689]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038836.5728436-2309-260596370124217/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:37.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:37 compute-0 sudo[207687]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:38 compute-0 sudo[207839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekmtgjdiiklacwbuoqpoeaextiyxpetq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038838.0939512-2309-79171642888045/AnsiballZ_stat.py'
Jan 21 23:40:38 compute-0 ceph-mon[74318]: pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:38 compute-0 sudo[207839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:38 compute-0 python3.9[207841]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:38 compute-0 sudo[207839]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:38.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:39 compute-0 sudo[207962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhyxgyppnbdxhbjqsfeegsudaurywkze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038838.0939512-2309-79171642888045/AnsiballZ_copy.py'
Jan 21 23:40:39 compute-0 sudo[207962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:40:39
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['.rgw.root', 'vms', '.mgr', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'backups', 'volumes', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta']
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:40:39 compute-0 python3.9[207964]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038838.0939512-2309-79171642888045/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:40:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:40:39 compute-0 sudo[207962]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:39.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:39 compute-0 sudo[208115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqznhrvkqkiuwpjmhugepnnjtrxuqoao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038839.5681005-2309-152363662460349/AnsiballZ_stat.py'
Jan 21 23:40:39 compute-0 sudo[208115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:40 compute-0 python3.9[208117]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:40 compute-0 sudo[208115]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:40 compute-0 sudo[208186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:40:40 compute-0 sudo[208186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:40 compute-0 sudo[208186]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:40 compute-0 sudo[208230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:40:40 compute-0 sudo[208230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:40:40 compute-0 sudo[208230]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:40 compute-0 sudo[208288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsleinuldrqdzcudkjpxwxtowsdpvtxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038839.5681005-2309-152363662460349/AnsiballZ_copy.py'
Jan 21 23:40:40 compute-0 sudo[208288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:40 compute-0 python3.9[208290]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038839.5681005-2309-152363662460349/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:40 compute-0 sudo[208288]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:40.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:41 compute-0 ceph-mon[74318]: pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:41 compute-0 sudo[208440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpwbgxvvubzatufynkhrmyjeucomqwjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038840.8933816-2309-9612778091434/AnsiballZ_stat.py'
Jan 21 23:40:41 compute-0 sudo[208440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:41 compute-0 python3.9[208442]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:41 compute-0 sudo[208440]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:41 compute-0 sudo[208564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfwyoctqospuozqcgvkzzpfjdezwycse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038840.8933816-2309-9612778091434/AnsiballZ_copy.py'
Jan 21 23:40:41 compute-0 sudo[208564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:41.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:42 compute-0 python3.9[208566]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038840.8933816-2309-9612778091434/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:42 compute-0 sudo[208564]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:40:42 compute-0 sudo[208716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diualmdvwggbvggbdkqnpymuymqxoaiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038842.2274866-2309-5539262924633/AnsiballZ_stat.py'
Jan 21 23:40:42 compute-0 sudo[208716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:42 compute-0 python3.9[208718]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:42 compute-0 sudo[208716]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:42.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:43 compute-0 sudo[208839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exhccnxzmnxokmvimspawiwbfypkcbre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038842.2274866-2309-5539262924633/AnsiballZ_copy.py'
Jan 21 23:40:43 compute-0 sudo[208839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:43 compute-0 ceph-mon[74318]: pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:43 compute-0 python3.9[208841]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038842.2274866-2309-5539262924633/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:43 compute-0 sudo[208839]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:43 compute-0 sudo[208992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdowunjttnqgndnuujfsuptuywrmakev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038843.5226254-2309-263997759543905/AnsiballZ_stat.py'
Jan 21 23:40:43 compute-0 sudo[208992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:40:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:43.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:40:44 compute-0 python3.9[208994]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:44 compute-0 sudo[208992]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:44 compute-0 sudo[209115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsnqcahheayyvpugbknaicaamrjrxwoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038843.5226254-2309-263997759543905/AnsiballZ_copy.py'
Jan 21 23:40:44 compute-0 sudo[209115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:44 compute-0 python3.9[209117]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038843.5226254-2309-263997759543905/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:44 compute-0 sudo[209115]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:44.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:45 compute-0 ceph-mon[74318]: pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:45 compute-0 sudo[209267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moghlbawzrbtdzgqdhkssvdysodjyjyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038844.9874644-2309-19969704481561/AnsiballZ_stat.py'
Jan 21 23:40:45 compute-0 sudo[209267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:45 compute-0 python3.9[209269]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:45 compute-0 sudo[209267]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:45.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:45 compute-0 sudo[209391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbhrjcgjhgaouvfhchkvskjvkbqjblfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038844.9874644-2309-19969704481561/AnsiballZ_copy.py'
Jan 21 23:40:45 compute-0 sudo[209391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:46 compute-0 python3.9[209393]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038844.9874644-2309-19969704481561/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:46 compute-0 sudo[209391]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:46 compute-0 sudo[209543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdlpmcsalqqrqvirdyrxqbrcsuwaiaqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038846.3349383-2309-248586663880042/AnsiballZ_stat.py'
Jan 21 23:40:46 compute-0 sudo[209543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:46.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:46 compute-0 python3.9[209545]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:46 compute-0 sudo[209543]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:47 compute-0 ceph-mon[74318]: pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:47 compute-0 sudo[209666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpsrdscskxwwudqncbckjrorznivfpsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038846.3349383-2309-248586663880042/AnsiballZ_copy.py'
Jan 21 23:40:47 compute-0 sudo[209666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:40:47 compute-0 python3.9[209668]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038846.3349383-2309-248586663880042/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:47 compute-0 sudo[209666]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:47.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:48 compute-0 sudo[209819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhtilawxfnmncyqnqxsgncingvadklcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038847.7462182-2309-136213570277449/AnsiballZ_stat.py'
Jan 21 23:40:48 compute-0 sudo[209819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:48 compute-0 python3.9[209821]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:48 compute-0 sudo[209819]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:40:48.734 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:40:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:40:48.736 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:40:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:40:48.736 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:40:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:48.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:48 compute-0 sudo[209942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smmqvnkoxfxluqpmwzulvwwnlhcrjqef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038847.7462182-2309-136213570277449/AnsiballZ_copy.py'
Jan 21 23:40:48 compute-0 sudo[209942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:49 compute-0 python3.9[209944]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038847.7462182-2309-136213570277449/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:49 compute-0 sudo[209942]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:49 compute-0 ceph-mon[74318]: pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:49 compute-0 sudo[210095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcmotwnydftqgczpziwzciyyufxchlyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038849.3331177-2309-147589189287968/AnsiballZ_stat.py'
Jan 21 23:40:49 compute-0 sudo[210095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:40:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:49.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:40:49 compute-0 python3.9[210097]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:49 compute-0 sudo[210095]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:50 compute-0 sudo[210218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnscnuhcqcqysbyshkkwgpdghmttckja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038849.3331177-2309-147589189287968/AnsiballZ_copy.py'
Jan 21 23:40:50 compute-0 sudo[210218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:50 compute-0 python3.9[210220]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038849.3331177-2309-147589189287968/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:50 compute-0 sudo[210218]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:40:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:50.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:40:51 compute-0 sudo[210370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyrpqtpkzeixaqeucynwazzqljtdhjcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038850.816976-2309-171288291052008/AnsiballZ_stat.py'
Jan 21 23:40:51 compute-0 sudo[210370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:51 compute-0 ceph-mon[74318]: pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:51 compute-0 python3.9[210372]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:51 compute-0 sudo[210370]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:51 compute-0 sudo[210504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klzpayuwtidkiwsppzxrgdroaiajltbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038850.816976-2309-171288291052008/AnsiballZ_copy.py'
Jan 21 23:40:51 compute-0 sudo[210504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:51 compute-0 podman[210468]: 2026-01-21 23:40:51.849132549 +0000 UTC m=+0.069261733 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 23:40:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:51.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:52 compute-0 python3.9[210515]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038850.816976-2309-171288291052008/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:52 compute-0 sudo[210504]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:40:52 compute-0 sudo[210665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsngwnepefmnldsuatftkzvtznslnaif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038852.1984682-2309-99727972708690/AnsiballZ_stat.py'
Jan 21 23:40:52 compute-0 sudo[210665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:52 compute-0 python3.9[210667]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:40:52 compute-0 sudo[210665]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:40:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:52.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:40:53 compute-0 ceph-mon[74318]: pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:53 compute-0 sudo[210789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-useecdbmukttjgukbrtlizwnbktiinlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038852.1984682-2309-99727972708690/AnsiballZ_copy.py'
Jan 21 23:40:53 compute-0 sudo[210789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:53 compute-0 python3.9[210791]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038852.1984682-2309-99727972708690/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:53 compute-0 sudo[210789]: pam_unix(sudo:session): session closed for user root
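Note: the ansible-ansible.legacy.stat/copy pairs above install the same rendered template (libvirt-socket.unit.j2, checksum 0bad41f409b4ee7e780a2a59dc18f5c84ed99826) as a systemd drop-in for each libvirt socket unit (virtqemud*, virtsecretd*). The drop-in body itself is withheld from the log (content=NOT_LOGGING_PARAMETER); the lines below are only a hypothetical sketch of what such a socket override commonly contains, not the file actually deployed here:

    # hypothetical sketch -- the real override content was not logged
    cat > /etc/systemd/system/virtqemud.socket.d/override.conf <<'EOF'
    [Socket]
    SocketMode=0660
    SocketGroup=libvirt
    EOF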
Jan 21 23:40:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:53.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:40:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
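Note: each pg_autoscaler line above computes its pg target as (fraction of space used) x (bias) x (about 300 target PGs -- consistent with three OSDs at the default mon_target_pg_per_osd=100, an inference from these numbers rather than something stated in the log). Worked from the logged values:

    .mgr:               0.000020538165363856318 x 1.0 x 300 = 0.006161449609156895  -> quantized to 1
    cephfs.cephfs.meta: 0.0000014540294062907128 x 4.0 x 300 = 0.0017448352875488555 -> quantized to 16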
Jan 21 23:40:54 compute-0 python3.9[210941]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
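Note: this task pipes a recursive, SELinux-aware listing of /run/libvirt through grep to look for container_*_t labels; with pipefail set, the pipeline's exit status reports whether any such label exists (how the playbook treats a match is not visible in this log). A manual re-run of the same check:

    set -o pipefail
    # exits 0 only if some ':container_..._t' context is present under /run/libvirt;
    # exits 1 (grep: no match) when none remain
    ls -lRZ /run/libvirt | grep -E ':container_\S+_t'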
Jan 21 23:40:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:40:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:54.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:40:55 compute-0 sudo[211094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdkxarrvtbgfcpkmofgzqjmjpmjhoobq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038854.6711557-2927-43632472453712/AnsiballZ_seboolean.py'
Jan 21 23:40:55 compute-0 sudo[211094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:55 compute-0 ceph-mon[74318]: pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:55 compute-0 python3.9[211096]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
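Note: ansible.posix.seboolean with state=True and persistent=True is the module form of setting an SELinux boolean with setsebool -P; a manual equivalent and check:

    setsebool -P os_enable_vtpm on
    getsebool os_enable_vtpm    # expected output: os_enable_vtpm --> on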
Jan 21 23:40:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:40:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:55.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:40:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:56 compute-0 ceph-mon[74318]: pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:56 compute-0 sudo[211094]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:40:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:56.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:40:57 compute-0 sudo[211253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkhryekiqhsclivhajsznkhzduqskzkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038856.836373-2951-188835328953113/AnsiballZ_copy.py'
Jan 21 23:40:57 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 21 23:40:57 compute-0 sudo[211253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:57 compute-0 python3.9[211255]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:57 compute-0 sudo[211253]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:57.501217) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038857501291, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 435, "num_deletes": 250, "total_data_size": 399955, "memory_usage": 407672, "flush_reason": "Manual Compaction"}
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038857506550, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 303970, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15338, "largest_seqno": 15771, "table_properties": {"data_size": 301586, "index_size": 484, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6137, "raw_average_key_size": 19, "raw_value_size": 296840, "raw_average_value_size": 939, "num_data_blocks": 22, "num_entries": 316, "num_filter_entries": 316, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769038833, "oldest_key_time": 1769038833, "file_creation_time": 1769038857, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 5423 microseconds, and 2877 cpu microseconds.
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:57.506650) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 303970 bytes OK
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:57.506670) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:57.508456) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:57.508481) EVENT_LOG_v1 {"time_micros": 1769038857508474, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:57.508503) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 397344, prev total WAL file size 397344, number of live WAL files 2.
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:57.509352) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(296KB)], [32(10MB)]
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038857509451, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 11714506, "oldest_snapshot_seqno": -1}
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4218 keys, 7983519 bytes, temperature: kUnknown
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038857591372, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7983519, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7953997, "index_size": 17892, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 104317, "raw_average_key_size": 24, "raw_value_size": 7876366, "raw_average_value_size": 1867, "num_data_blocks": 751, "num_entries": 4218, "num_filter_entries": 4218, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769038857, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:57.591698) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7983519 bytes
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:57.593126) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.8 rd, 97.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.9 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(64.8) write-amplify(26.3) OK, records in: 4720, records dropped: 502 output_compression: NoCompression
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:57.593146) EVENT_LOG_v1 {"time_micros": 1769038857593135, "job": 14, "event": "compaction_finished", "compaction_time_micros": 82013, "compaction_time_cpu_micros": 48609, "output_level": 6, "num_output_files": 1, "total_output_size": 7983519, "num_input_records": 4720, "num_output_records": 4218, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038857593347, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038857595187, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:57.509227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:57.595244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:57.595252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:57.595255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:57.595258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:40:57 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:40:57.595261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
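Note: the amplification figures in the job 14 compaction summary follow directly from the logged event sizes. The compaction read 11714506 bytes of input (the 296 KB L0 table plus the 10.9 MB L6 table) and wrote 7983519 bytes, all against 303970 bytes of newly flushed L0 data, so write-amplify = 7983519 / 303970 ~= 26.3 and read-write-amplify = (11714506 + 7983519) / 303970 ~= 64.8, matching the logged values.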
Jan 21 23:40:57 compute-0 sudo[211406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btzhkalfxpyezdayyvdgitkxnfyguuxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038857.5235271-2951-198332238430395/AnsiballZ_copy.py'
Jan 21 23:40:57 compute-0 sudo[211406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:57.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:58 compute-0 python3.9[211408]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:58 compute-0 sudo[211406]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:58 compute-0 ceph-mon[74318]: pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:40:58 compute-0 sudo[211558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcbmevyzdsfuitcjkdpxrcedervgumur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038858.2425168-2951-256398754829798/AnsiballZ_copy.py'
Jan 21 23:40:58 compute-0 sudo[211558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:58 compute-0 python3.9[211560]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:58 compute-0 sudo[211558]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:40:58.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:40:59 compute-0 sudo[211710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckplrubrezwhophhqzwqpysewnxfakne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038858.9831295-2951-160599481634347/AnsiballZ_copy.py'
Jan 21 23:40:59 compute-0 sudo[211710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:40:59 compute-0 python3.9[211713]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:40:59 compute-0 sudo[211710]: pam_unix(sudo:session): session closed for user root
Jan 21 23:40:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:40:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:40:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:40:59.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:00 compute-0 sudo[211863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yibaviartuzzskbvrsvgtpeuwfxltihs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038859.7121475-2951-34514208584/AnsiballZ_copy.py'
Jan 21 23:41:00 compute-0 sudo[211863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:00 compute-0 python3.9[211865]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:00 compute-0 sudo[211863]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:00 compute-0 sudo[211902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:41:00 compute-0 sudo[211902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:00 compute-0 sudo[211902]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:00 compute-0 sudo[211959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:41:00 compute-0 sudo[211959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:00 compute-0 sudo[211959]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:00.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:00 compute-0 sudo[212065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuzflqmfozlructbsluljvmwyznqoivw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038860.5860171-3059-221797535680544/AnsiballZ_copy.py'
Jan 21 23:41:00 compute-0 sudo[212065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:01 compute-0 python3.9[212067]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:01 compute-0 sudo[212065]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:01 compute-0 ceph-mon[74318]: pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:01 compute-0 sudo[212218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmomsgifdpyvzgvztqckumcfvqmyfxok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038861.3151114-3059-241376912866567/AnsiballZ_copy.py'
Jan 21 23:41:01 compute-0 sudo[212218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:01 compute-0 python3.9[212220]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:01 compute-0 sudo[212218]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:01.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:02 compute-0 sudo[212370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acrdgzhorwlysfyfxewnhmjovusbyfbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038862.1527464-3059-35236205227177/AnsiballZ_copy.py'
Jan 21 23:41:02 compute-0 sudo[212370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:41:02 compute-0 python3.9[212372]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:02 compute-0 sudo[212370]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:02.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:03 compute-0 ceph-mon[74318]: pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:03 compute-0 sudo[212522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moronjrqklqychshlvicdmlxdwqurwqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038862.8923151-3059-130076296351772/AnsiballZ_copy.py'
Jan 21 23:41:03 compute-0 sudo[212522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:03 compute-0 python3.9[212524]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:03 compute-0 sudo[212522]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:03 compute-0 sudo[212675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryrlzzowaudowtldjjsuhgfjfiwqbijq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038863.5887334-3059-153596548048898/AnsiballZ_copy.py'
Jan 21 23:41:03 compute-0 sudo[212675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:03.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:04 compute-0 python3.9[212677]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:04 compute-0 sudo[212675]: pam_unix(sudo:session): session closed for user root
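Note: the copy tasks above populate the default libvirt TLS layout (/etc/pki/CA/cacert.pem, /etc/pki/libvirt/servercert.pem, /etc/pki/libvirt/clientcert.pem and the matching keys under /etc/pki/libvirt/private) plus the QEMU-native layout under /etc/pki/qemu, all from the same operator-issued tls.crt/tls.key/ca.crt. Assuming the libvirt client tools are installed, the default layout can be sanity-checked with:

    # ships with libvirt; validates CA, server and client certificate/key placement
    virt-pki-validate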
Jan 21 23:41:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:04.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:04 compute-0 sudo[212827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osgzqfsqltunjbdfssswcjoyfpxlthlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038864.4783683-3167-20909134833305/AnsiballZ_systemd.py'
Jan 21 23:41:04 compute-0 sudo[212827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:05 compute-0 ceph-mon[74318]: pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:05 compute-0 python3.9[212829]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 23:41:05 compute-0 systemd[1]: Reloading.
Jan 21 23:41:05 compute-0 systemd-rc-local-generator[212875]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:41:05 compute-0 systemd-sysv-generator[212879]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:41:05 compute-0 podman[212831]: 2026-01-21 23:41:05.490894231 +0000 UTC m=+0.210520949 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 21 23:41:05 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 21 23:41:05 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 21 23:41:05 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 21 23:41:05 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 21 23:41:05 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 21 23:41:05 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 21 23:41:05 compute-0 sudo[212827]: pam_unix(sudo:session): session closed for user root
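Note: ansible.builtin.systemd with daemon_reload=True and state=restarted produces exactly the Reloading/Starting/Started sequence above; the manual equivalent is:

    systemctl daemon-reload
    systemctl restart virtlogd.service
    systemctl --no-pager status virtlogd.service   # confirm the socket-activated daemon is running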
Jan 21 23:41:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:41:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:05.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:41:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:06 compute-0 sudo[213046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxxmuhysguhcadqgayjqfcedjncvprga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038866.037608-3167-207001977209439/AnsiballZ_systemd.py'
Jan 21 23:41:06 compute-0 sudo[213046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:06 compute-0 python3.9[213048]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 23:41:06 compute-0 systemd[1]: Reloading.
Jan 21 23:41:06 compute-0 systemd-rc-local-generator[213076]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:41:06 compute-0 systemd-sysv-generator[213080]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:41:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:06.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:07 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 21 23:41:07 compute-0 ceph-mon[74318]: pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:07 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 21 23:41:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:41:07 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 21 23:41:07 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 21 23:41:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:07.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:08 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 21 23:41:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:08 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 21 23:41:08 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 21 23:41:08 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 21 23:41:08 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 21 23:41:08 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 21 23:41:08 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 21 23:41:08 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 21 23:41:08 compute-0 sudo[213046]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:08 compute-0 setroubleshoot[213085]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 4ce80c30-e2ec-4f88-aeea-21fb5cd5b740
Jan 21 23:41:08 compute-0 ceph-mon[74318]: pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:08 compute-0 setroubleshoot[213085]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify whether the domain needs this access, or whether a file on your system has the wrong permissions, then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do the following:
                                                  
                                                  Turn on full auditing:
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate the AVC, then execute:
                                                  # ausearch -m avc -ts recent
                                                  If you see a PATH record, check the ownership and permissions on the file and fix them; otherwise, report this as a bug in Bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default, then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  To allow this access for now, execute:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
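For reference, a minimal sketch of what the audit2allow step above would typically generate for this denial. The .te body is illustrative (assuming the logged AVC is the only denial captured), not taken from this host:

    $ ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    $ cat my-virtlogd.te
    module my-virtlogd 1.0;

    require {
            type virtlogd_t;
            class capability dac_read_search;
    }

    allow virtlogd_t self:capability dac_read_search;
    $ semodule -X 300 -i my-virtlogd.pp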
                                                  
Jan 21 23:41:08 compute-0 sudo[213274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egnqcjlwubfjjylvkrookknwypcmrpyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038868.3666568-3167-131859745951625/AnsiballZ_systemd.py'
Jan 21 23:41:08 compute-0 sudo[213274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:08.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:08 compute-0 python3.9[213276]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
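Each AnsiballZ_systemd.py payload in this sequence is a packed ansible.builtin.systemd task; the arguments logged above for virtproxyd are equivalent to the following ad-hoc call (a sketch; the localhost targeting is an assumption, not taken from the job's inventory):

    ansible localhost -b -m ansible.builtin.systemd \
        -a 'name=virtproxyd.service state=restarted daemon_reload=true'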
Jan 21 23:41:09 compute-0 systemd[1]: Reloading.
Jan 21 23:41:09 compute-0 systemd-rc-local-generator[213303]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:41:09 compute-0 systemd-sysv-generator[213307]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:41:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:41:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:41:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:41:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:41:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:41:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:41:09 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 21 23:41:09 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 21 23:41:09 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 21 23:41:09 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 21 23:41:09 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 21 23:41:09 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 21 23:41:09 compute-0 sudo[213274]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:41:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:09.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:41:10 compute-0 sudo[213486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlpdytujjthqcqyyrivgyhifruqwzegp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038869.6980572-3167-268870137309978/AnsiballZ_systemd.py'
Jan 21 23:41:10 compute-0 sudo[213486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:10 compute-0 python3.9[213488]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 23:41:10 compute-0 systemd[1]: Reloading.
Jan 21 23:41:10 compute-0 systemd-sysv-generator[213519]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:41:10 compute-0 systemd-rc-local-generator[213515]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:41:10 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 21 23:41:10 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 21 23:41:10 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 21 23:41:10 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 21 23:41:10 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 21 23:41:10 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 21 23:41:10 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 21 23:41:10 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 21 23:41:10 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 21 23:41:10 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 21 23:41:10 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 21 23:41:10 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 21 23:41:10 compute-0 sudo[213486]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:10.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:11 compute-0 ceph-mon[74318]: pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:11 compute-0 sudo[213701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttjzgpllmnlnemupfkbdbldovijnevyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038871.1398427-3167-183518340612293/AnsiballZ_systemd.py'
Jan 21 23:41:11 compute-0 sudo[213701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:11 compute-0 python3.9[213703]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 23:41:11 compute-0 systemd[1]: Reloading.
Jan 21 23:41:11 compute-0 systemd-rc-local-generator[213732]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:41:11 compute-0 systemd-sysv-generator[213736]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:41:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:11.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:12 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 21 23:41:12 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 21 23:41:12 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 21 23:41:12 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 21 23:41:12 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 21 23:41:12 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 21 23:41:12 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 21 23:41:12 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 21 23:41:12 compute-0 sudo[213701]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:41:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:12.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:13 compute-0 ceph-mon[74318]: pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:13 compute-0 sudo[213914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jozkurvvhbnfylnbydqxoilhamwlhomt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038872.826291-3278-125421128202873/AnsiballZ_file.py'
Jan 21 23:41:13 compute-0 sudo[213914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:13 compute-0 python3.9[213916]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:13 compute-0 sudo[213914]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:41:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:13.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:41:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:14 compute-0 sudo[214067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncraxjbdihmiwbfkoixporscqxbomlpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038873.740355-3302-124985328778366/AnsiballZ_find.py'
Jan 21 23:41:14 compute-0 sudo[214067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:14 compute-0 python3.9[214069]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 23:41:14 compute-0 sudo[214067]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:41:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:14.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:41:14 compute-0 sudo[214219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkghkahrwrhbxzdxhzztnmnszoxsvejn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038874.5634396-3326-71042286663788/AnsiballZ_command.py'
Jan 21 23:41:14 compute-0 sudo[214219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:15 compute-0 python3.9[214221]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
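The shell task above echoes the cluster name and extracts the fsid from ceph.conf; xargs is there only to trim the whitespace that awk leaves around the value. A minimal sketch of the input it assumes (the fsid matches the one used later in this log; mon_host is elided):

    $ cat /var/lib/openstack/config/ceph/ceph.conf
    [global]
    fsid = 3759241a-7f1c-520d-ba17-879943ee2f00
    mon_host = ...
    $ awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
    3759241a-7f1c-520d-ba17-879943ee2f00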
Jan 21 23:41:15 compute-0 sudo[214219]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:15 compute-0 ceph-mon[74318]: pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:15.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:16 compute-0 python3.9[214376]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 23:41:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:41:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:16.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:41:17 compute-0 python3.9[214526]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:41:17 compute-0 ceph-mon[74318]: pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:41:17 compute-0 python3.9[214648]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769038876.6326764-3383-139798658056212/.source.xml follow=False _original_basename=secret.xml.j2 checksum=c457f588cd11e74674cdca2eab1683355966fc97 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:17.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:18 compute-0 sudo[214798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cibbvdoytpgzobdfkbcegyqwzeokzyfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038878.121536-3428-81817864395100/AnsiballZ_command.py'
Jan 21 23:41:18 compute-0 sudo[214798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:18 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 21 23:41:18 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.035s CPU time.
Jan 21 23:41:18 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 21 23:41:18 compute-0 python3.9[214800]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 3759241a-7f1c-520d-ba17-879943ee2f00
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
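The task above refreshes the libvirt Ceph secret: it undefines the existing secret, then redefines it from /tmp/secret.xml (copied into place just before and deleted just after). A minimal sketch of the usual libvirt secret definition for Ceph, with the UUID taken from the command above (the usage name is a placeholder, not taken from this host); the FSID/KEY environment passed to a later task suggests the value is then set with virsh secret-set-value, as sketched:

    $ cat /tmp/secret.xml
    <secret ephemeral='no' private='no'>
      <uuid>3759241a-7f1c-520d-ba17-879943ee2f00</uuid>
      <usage type='ceph'>
        <name>client.openstack secret</name>
      </usage>
    </secret>
    $ virsh secret-define --file /tmp/secret.xml
    $ virsh secret-set-value --secret 3759241a-7f1c-520d-ba17-879943ee2f00 --base64 "$KEY"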
Jan 21 23:41:18 compute-0 polkitd[43428]: Registered Authentication Agent for unix-process:214802:341846 (system bus name :1.2881 [pkttyagent --process 214802 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 21 23:41:18 compute-0 polkitd[43428]: Unregistered Authentication Agent for unix-process:214802:341846 (system bus name :1.2881, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 21 23:41:18 compute-0 polkitd[43428]: Registered Authentication Agent for unix-process:214801:341845 (system bus name :1.2882 [pkttyagent --process 214801 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 21 23:41:18 compute-0 polkitd[43428]: Unregistered Authentication Agent for unix-process:214801:341845 (system bus name :1.2882, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 21 23:41:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:41:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:18.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:41:18 compute-0 sudo[214798]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:19 compute-0 ceph-mon[74318]: pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:19 compute-0 python3.9[214963]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:19.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:20 compute-0 sudo[215113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwtwumucabzmazovvjuymwvtjejgesxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038879.953295-3476-221454417442738/AnsiballZ_command.py'
Jan 21 23:41:20 compute-0 sudo[215113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:20 compute-0 sudo[215113]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:20 compute-0 sudo[215141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:41:20 compute-0 sudo[215141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:20 compute-0 sudo[215141]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:20 compute-0 sudo[215166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:41:20 compute-0 sudo[215166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:20 compute-0 sudo[215166]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:20.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:21 compute-0 sudo[215316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkgjuxykwdktvcigyebjagzchgysbdtc ; FSID=3759241a-7f1c-520d-ba17-879943ee2f00 KEY=AQCpX3FpAAAAABAAo4kgEsfAoeB8cTkM6A+wAA== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038880.867396-3500-149834868266693/AnsiballZ_command.py'
Jan 21 23:41:21 compute-0 sudo[215316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:21 compute-0 ceph-mon[74318]: pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:21 compute-0 polkitd[43428]: Registered Authentication Agent for unix-process:215320:342112 (system bus name :1.2887 [pkttyagent --process 215320 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 21 23:41:21 compute-0 polkitd[43428]: Unregistered Authentication Agent for unix-process:215320:342112 (system bus name :1.2887, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 21 23:41:21 compute-0 sudo[215316]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:21.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:22 compute-0 podman[215400]: 2026-01-21 23:41:22.004838387 +0000 UTC m=+0.095799902 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
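The health_status entry above is podman's periodic healthcheck for the ovn_metadata_agent container; the configured test is the /openstack/healthcheck script mounted into the container. The same check can be run on demand (container name taken from the log):

    $ podman healthcheck run ovn_metadata_agent
    $ echo $?
    0    # 0 = healthy, non-zero = unhealthy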
Jan 21 23:41:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:22 compute-0 sudo[215492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mykowkvymsxgbnosgagbnfxvjlggmfmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038881.8082175-3524-276442325304087/AnsiballZ_copy.py'
Jan 21 23:41:22 compute-0 sudo[215492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:22 compute-0 python3.9[215494]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:22 compute-0 sudo[215492]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:41:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:22.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:22 compute-0 sudo[215644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjwpjaoghjrgpstsxhhqtakpxbzznkth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038882.5843425-3548-55726207195980/AnsiballZ_stat.py'
Jan 21 23:41:22 compute-0 sudo[215644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:23 compute-0 python3.9[215646]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:41:23 compute-0 sudo[215644]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:23 compute-0 ceph-mon[74318]: pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:23 compute-0 sudo[215768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcizpjbexxdvkgulqzjzgkrrqaxzwrny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038882.5843425-3548-55726207195980/AnsiballZ_copy.py'
Jan 21 23:41:23 compute-0 sudo[215768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:23 compute-0 python3.9[215770]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769038882.5843425-3548-55726207195980/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:23 compute-0 sudo[215768]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:23.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:24 compute-0 sudo[215920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbiwicpsoormwwqqkpgpfqzxprdpgdws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038884.1114237-3596-99531352001975/AnsiballZ_file.py'
Jan 21 23:41:24 compute-0 sudo[215920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:24 compute-0 python3.9[215922]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:24 compute-0 sudo[215920]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:41:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:24.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:41:25 compute-0 sudo[216072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdyathgkahdigviqsoekdzbwcbbfdkad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038884.8289356-3620-134950616965995/AnsiballZ_stat.py'
Jan 21 23:41:25 compute-0 sudo[216072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:25 compute-0 ceph-mon[74318]: pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:25 compute-0 python3.9[216074]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:41:25 compute-0 sudo[216072]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:25 compute-0 sudo[216151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyrthjlhchjmlcofzkzqipryubwdyvpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038884.8289356-3620-134950616965995/AnsiballZ_file.py'
Jan 21 23:41:25 compute-0 sudo[216151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:25 compute-0 python3.9[216153]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:25 compute-0 sudo[216151]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:25.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:26 compute-0 sudo[216303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzsimmnccriketrdhifsoynxqahhwkrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038886.3010595-3656-238498337305898/AnsiballZ_stat.py'
Jan 21 23:41:26 compute-0 sudo[216303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:26 compute-0 python3.9[216305]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:41:26 compute-0 sudo[216303]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:41:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:26.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:41:27 compute-0 sudo[216381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyojmimmxavferkxgmjhbxsvrbmtiicx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038886.3010595-3656-238498337305898/AnsiballZ_file.py'
Jan 21 23:41:27 compute-0 sudo[216381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:27 compute-0 python3.9[216383]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.t2ncngj4 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:27 compute-0 sudo[216381]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:27 compute-0 ceph-mon[74318]: pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:41:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:41:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:27.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:41:28 compute-0 sudo[216534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxexkpcfritsftodnillumtygqffdhrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038887.7453547-3692-104289676166658/AnsiballZ_stat.py'
Jan 21 23:41:28 compute-0 sudo[216534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:28 compute-0 python3.9[216536]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:41:28 compute-0 sudo[216534]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:29.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:29 compute-0 ceph-mon[74318]: pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:29 compute-0 sudo[216613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvsuqxpznnvvfnqusobtkxfeluuiwsvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038887.7453547-3692-104289676166658/AnsiballZ_file.py'
Jan 21 23:41:29 compute-0 sudo[216613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:29 compute-0 python3.9[216615]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:29 compute-0 sudo[216613]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:29 compute-0 sudo[216639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:41:29 compute-0 sudo[216639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:29 compute-0 sudo[216639]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:29 compute-0 sudo[216665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:41:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:29.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:30 compute-0 sudo[216665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:30 compute-0 sudo[216665]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:30 compute-0 sudo[216690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:41:30 compute-0 sudo[216690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:30 compute-0 sudo[216690]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:30 compute-0 sudo[216739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:41:30 compute-0 sudo[216739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:30 compute-0 sudo[216877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abotcrlxqulagunwrinnbvonygksivyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038890.0702863-3731-188380134138853/AnsiballZ_command.py'
Jan 21 23:41:30 compute-0 sudo[216877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:30 compute-0 python3.9[216881]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
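The nft -j invocation above dumps the live ruleset as JSON for the edpm_nftables modules to consume; the top-level shape is a single "nftables" array (skeleton only; the contents below are illustrative, not this host's ruleset):

    $ nft -j list ruleset
    {"nftables": [{"metainfo": {...}},
                  {"table": {"family": "inet", "name": "...", ...}},
                  {"chain": {...}},
                  {"rule": {...}}]}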
Jan 21 23:41:30 compute-0 sudo[216877]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:30 compute-0 ceph-mon[74318]: pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:30 compute-0 sudo[216739]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:41:30 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:41:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:41:30 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:41:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:41:30 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:41:30 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 8a90efc4-9b4d-401f-bd02-dfd92aff082b does not exist
Jan 21 23:41:30 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 8bad5c74-e391-4661-bd99-032c4548adb6 does not exist
Jan 21 23:41:30 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev bc51098a-66bf-49ac-902f-856e5028238a does not exist
Jan 21 23:41:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:41:30 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:41:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:41:30 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:41:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:41:30 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:41:30 compute-0 sudo[216945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:41:30 compute-0 sudo[216945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:30 compute-0 sudo[216945]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:30 compute-0 sudo[217001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:41:30 compute-0 sudo[217001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:30 compute-0 sudo[217001]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:30 compute-0 sudo[217026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:41:30 compute-0 sudo[217026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:30 compute-0 sudo[217026]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:31 compute-0 sudo[217051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:41:31 compute-0 sudo[217051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
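The cephadm call above runs ceph-volume inside the Ceph container to prepare an OSD on the pre-created logical volume; --no-systemd is passed because cephadm manages the OSD units itself. To preview what such a batch would do without applying it, ceph-volume supports a report mode (a sketch, assuming the same device list):

    ceph-volume lvm batch --report /dev/ceph_vg0/ceph_lv0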
Jan 21 23:41:31 compute-0 sudo[217177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlrpwfpmepjdngrkdwwbcjbbyupvrfyj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769038890.8132348-3755-280867951359157/AnsiballZ_edpm_nftables_from_files.py'
Jan 21 23:41:31 compute-0 sudo[217177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:31 compute-0 podman[217195]: 2026-01-21 23:41:31.395185138 +0000 UTC m=+0.045273392 container create b3ee50595d85ae7a00f44b2247b9e2531b43186c17686cf6128b92954a0d9c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kare, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 21 23:41:31 compute-0 systemd[1]: Started libpod-conmon-b3ee50595d85ae7a00f44b2247b9e2531b43186c17686cf6128b92954a0d9c15.scope.
Jan 21 23:41:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:31.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:31 compute-0 podman[217195]: 2026-01-21 23:41:31.378444107 +0000 UTC m=+0.028532391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:41:31 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:41:31 compute-0 podman[217195]: 2026-01-21 23:41:31.506893843 +0000 UTC m=+0.156982147 container init b3ee50595d85ae7a00f44b2247b9e2531b43186c17686cf6128b92954a0d9c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:41:31 compute-0 podman[217195]: 2026-01-21 23:41:31.519309211 +0000 UTC m=+0.169397465 container start b3ee50595d85ae7a00f44b2247b9e2531b43186c17686cf6128b92954a0d9c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kare, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 21 23:41:31 compute-0 podman[217195]: 2026-01-21 23:41:31.524715257 +0000 UTC m=+0.174803531 container attach b3ee50595d85ae7a00f44b2247b9e2531b43186c17686cf6128b92954a0d9c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 21 23:41:31 compute-0 dreamy_kare[217211]: 167 167
Jan 21 23:41:31 compute-0 systemd[1]: libpod-b3ee50595d85ae7a00f44b2247b9e2531b43186c17686cf6128b92954a0d9c15.scope: Deactivated successfully.
Jan 21 23:41:31 compute-0 conmon[217211]: conmon b3ee50595d85ae7a00f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3ee50595d85ae7a00f44b2247b9e2531b43186c17686cf6128b92954a0d9c15.scope/container/memory.events
Jan 21 23:41:31 compute-0 podman[217195]: 2026-01-21 23:41:31.527711487 +0000 UTC m=+0.177799761 container died b3ee50595d85ae7a00f44b2247b9e2531b43186c17686cf6128b92954a0d9c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kare, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 23:41:31 compute-0 python3[217179]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 21 23:41:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-86a43dfa61042eb6fad4d81d92e4f9b4812efcd74e1fe73325a457f26ec4d850-merged.mount: Deactivated successfully.
Jan 21 23:41:31 compute-0 podman[217195]: 2026-01-21 23:41:31.572990028 +0000 UTC m=+0.223078282 container remove b3ee50595d85ae7a00f44b2247b9e2531b43186c17686cf6128b92954a0d9c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:41:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:41:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:41:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:41:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:41:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:41:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:41:31 compute-0 sudo[217177]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:31 compute-0 systemd[1]: libpod-conmon-b3ee50595d85ae7a00f44b2247b9e2531b43186c17686cf6128b92954a0d9c15.scope: Deactivated successfully.
Jan 21 23:41:31 compute-0 podman[217259]: 2026-01-21 23:41:31.792924764 +0000 UTC m=+0.070562323 container create 6d448fd3149b5dfbde30fa26c6ccf1c93f5da04ba11a2a4a314e1fc1d97bbfde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 21 23:41:31 compute-0 systemd[1]: Started libpod-conmon-6d448fd3149b5dfbde30fa26c6ccf1c93f5da04ba11a2a4a314e1fc1d97bbfde.scope.
Jan 21 23:41:31 compute-0 podman[217259]: 2026-01-21 23:41:31.762666051 +0000 UTC m=+0.040303700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:41:31 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62293e1d70fdf9b56855be377133171b133b5765db4d936926666ddd8d9a5135/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62293e1d70fdf9b56855be377133171b133b5765db4d936926666ddd8d9a5135/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62293e1d70fdf9b56855be377133171b133b5765db4d936926666ddd8d9a5135/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62293e1d70fdf9b56855be377133171b133b5765db4d936926666ddd8d9a5135/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62293e1d70fdf9b56855be377133171b133b5765db4d936926666ddd8d9a5135/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:41:31 compute-0 podman[217259]: 2026-01-21 23:41:31.87613922 +0000 UTC m=+0.153776789 container init 6d448fd3149b5dfbde30fa26c6ccf1c93f5da04ba11a2a4a314e1fc1d97bbfde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:41:31 compute-0 podman[217259]: 2026-01-21 23:41:31.883471623 +0000 UTC m=+0.161109172 container start 6d448fd3149b5dfbde30fa26c6ccf1c93f5da04ba11a2a4a314e1fc1d97bbfde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:41:31 compute-0 podman[217259]: 2026-01-21 23:41:31.88664986 +0000 UTC m=+0.164287409 container attach 6d448fd3149b5dfbde30fa26c6ccf1c93f5da04ba11a2a4a314e1fc1d97bbfde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 21 23:41:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:31.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:32 compute-0 sudo[217405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqocvtcehkuryuyygwbstkojukwmrada ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038891.797313-3779-245791173939265/AnsiballZ_stat.py'
Jan 21 23:41:32 compute-0 sudo[217405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:32 compute-0 python3.9[217407]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:41:32 compute-0 sudo[217405]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:41:32 compute-0 ceph-mon[74318]: pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:32 compute-0 sudo[217487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avddduszunercsosfeelgytixugvqiaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038891.797313-3779-245791173939265/AnsiballZ_file.py'
Jan 21 23:41:32 compute-0 sudo[217487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:32 compute-0 eloquent_hoover[217316]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:41:32 compute-0 eloquent_hoover[217316]: --> relative data size: 1.0
Jan 21 23:41:32 compute-0 eloquent_hoover[217316]: --> All data devices are unavailable
Jan 21 23:41:32 compute-0 podman[217259]: 2026-01-21 23:41:32.803526433 +0000 UTC m=+1.081164022 container died 6d448fd3149b5dfbde30fa26c6ccf1c93f5da04ba11a2a4a314e1fc1d97bbfde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 21 23:41:32 compute-0 systemd[1]: libpod-6d448fd3149b5dfbde30fa26c6ccf1c93f5da04ba11a2a4a314e1fc1d97bbfde.scope: Deactivated successfully.
Jan 21 23:41:32 compute-0 python3.9[217491]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-62293e1d70fdf9b56855be377133171b133b5765db4d936926666ddd8d9a5135-merged.mount: Deactivated successfully.
Jan 21 23:41:32 compute-0 sudo[217487]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:32 compute-0 podman[217259]: 2026-01-21 23:41:32.859923353 +0000 UTC m=+1.137560902 container remove 6d448fd3149b5dfbde30fa26c6ccf1c93f5da04ba11a2a4a314e1fc1d97bbfde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 21 23:41:32 compute-0 systemd[1]: libpod-conmon-6d448fd3149b5dfbde30fa26c6ccf1c93f5da04ba11a2a4a314e1fc1d97bbfde.scope: Deactivated successfully.
Jan 21 23:41:32 compute-0 sudo[217051]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:32 compute-0 sudo[217531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:41:32 compute-0 sudo[217531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:32 compute-0 sudo[217531]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:33 compute-0 sudo[217558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:41:33 compute-0 sudo[217558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:33 compute-0 sudo[217558]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:33 compute-0 sudo[217583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:41:33 compute-0 sudo[217583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:33 compute-0 sudo[217583]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:33 compute-0 sudo[217631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:41:33 compute-0 sudo[217631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:33 compute-0 sudo[217784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpzgmadxrpnewbfukenhnyxillhifwsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038893.092897-3815-146125935876926/AnsiballZ_stat.py'
Jan 21 23:41:33 compute-0 sudo[217784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:33.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:33 compute-0 podman[217801]: 2026-01-21 23:41:33.56014818 +0000 UTC m=+0.050433249 container create 4f2d0ff697fdc4ae040f91b6834ecd14acbf998ab6f1073e930e8c7752bceb3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 23:41:33 compute-0 systemd[1]: Started libpod-conmon-4f2d0ff697fdc4ae040f91b6834ecd14acbf998ab6f1073e930e8c7752bceb3b.scope.
Jan 21 23:41:33 compute-0 podman[217801]: 2026-01-21 23:41:33.541349347 +0000 UTC m=+0.031634456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:41:33 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:41:33 compute-0 python3.9[217789]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:41:33 compute-0 podman[217801]: 2026-01-21 23:41:33.656413295 +0000 UTC m=+0.146698404 container init 4f2d0ff697fdc4ae040f91b6834ecd14acbf998ab6f1073e930e8c7752bceb3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 23:41:33 compute-0 podman[217801]: 2026-01-21 23:41:33.662525441 +0000 UTC m=+0.152810520 container start 4f2d0ff697fdc4ae040f91b6834ecd14acbf998ab6f1073e930e8c7752bceb3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goldwasser, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:41:33 compute-0 podman[217801]: 2026-01-21 23:41:33.666061279 +0000 UTC m=+0.156346358 container attach 4f2d0ff697fdc4ae040f91b6834ecd14acbf998ab6f1073e930e8c7752bceb3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 21 23:41:33 compute-0 cool_goldwasser[217818]: 167 167
Jan 21 23:41:33 compute-0 systemd[1]: libpod-4f2d0ff697fdc4ae040f91b6834ecd14acbf998ab6f1073e930e8c7752bceb3b.scope: Deactivated successfully.
Jan 21 23:41:33 compute-0 podman[217801]: 2026-01-21 23:41:33.672219336 +0000 UTC m=+0.162504415 container died 4f2d0ff697fdc4ae040f91b6834ecd14acbf998ab6f1073e930e8c7752bceb3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goldwasser, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 23:41:33 compute-0 sudo[217784]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf7610a98f244f0de78e94e7d84e1fb3d74c5e82bb230352f3909a9ada7661c8-merged.mount: Deactivated successfully.
Jan 21 23:41:33 compute-0 podman[217801]: 2026-01-21 23:41:33.71396764 +0000 UTC m=+0.204252719 container remove 4f2d0ff697fdc4ae040f91b6834ecd14acbf998ab6f1073e930e8c7752bceb3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 21 23:41:33 compute-0 systemd[1]: libpod-conmon-4f2d0ff697fdc4ae040f91b6834ecd14acbf998ab6f1073e930e8c7752bceb3b.scope: Deactivated successfully.
Jan 21 23:41:33 compute-0 podman[217909]: 2026-01-21 23:41:33.910446099 +0000 UTC m=+0.054365258 container create 6595d88a17d935c7a1efdc30810feb8c878210f541f9a3cda10e6ae79909943d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:41:33 compute-0 systemd[1]: Started libpod-conmon-6595d88a17d935c7a1efdc30810feb8c878210f541f9a3cda10e6ae79909943d.scope.
Jan 21 23:41:33 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f091d47009ee88b2bf24179a3872931b2e956ca99a7edd4d93fdc4c99815633/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f091d47009ee88b2bf24179a3872931b2e956ca99a7edd4d93fdc4c99815633/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f091d47009ee88b2bf24179a3872931b2e956ca99a7edd4d93fdc4c99815633/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f091d47009ee88b2bf24179a3872931b2e956ca99a7edd4d93fdc4c99815633/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:41:33 compute-0 podman[217909]: 2026-01-21 23:41:33.883417746 +0000 UTC m=+0.027336935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:41:33 compute-0 podman[217909]: 2026-01-21 23:41:33.97998381 +0000 UTC m=+0.123902989 container init 6595d88a17d935c7a1efdc30810feb8c878210f541f9a3cda10e6ae79909943d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_banzai, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:41:33 compute-0 podman[217909]: 2026-01-21 23:41:33.990423908 +0000 UTC m=+0.134343067 container start 6595d88a17d935c7a1efdc30810feb8c878210f541f9a3cda10e6ae79909943d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:41:33 compute-0 podman[217909]: 2026-01-21 23:41:33.993582524 +0000 UTC m=+0.137501713 container attach 6595d88a17d935c7a1efdc30810feb8c878210f541f9a3cda10e6ae79909943d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_banzai, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:41:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:33.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:34 compute-0 sudo[217986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuneehdjucgnbtsgffvxsxquwpiffsfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038893.092897-3815-146125935876926/AnsiballZ_copy.py'
Jan 21 23:41:34 compute-0 sudo[217986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:34 compute-0 python3.9[217988]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038893.092897-3815-146125935876926/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:34 compute-0 sudo[217986]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:34 compute-0 nifty_banzai[217955]: {
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:     "1": [
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:         {
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:             "devices": [
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:                 "/dev/loop3"
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:             ],
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:             "lv_name": "ceph_lv0",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:             "lv_size": "7511998464",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:             "name": "ceph_lv0",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:             "tags": {
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:                 "ceph.cluster_name": "ceph",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:                 "ceph.crush_device_class": "",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:                 "ceph.encrypted": "0",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:                 "ceph.osd_id": "1",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:                 "ceph.type": "block",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:                 "ceph.vdo": "0"
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:             },
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:             "type": "block",
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:             "vg_name": "ceph_vg0"
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:         }
Jan 21 23:41:34 compute-0 nifty_banzai[217955]:     ]
Jan 21 23:41:34 compute-0 nifty_banzai[217955]: }
Jan 21 23:41:34 compute-0 systemd[1]: libpod-6595d88a17d935c7a1efdc30810feb8c878210f541f9a3cda10e6ae79909943d.scope: Deactivated successfully.
Jan 21 23:41:34 compute-0 conmon[217955]: conmon 6595d88a17d935c7a1ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6595d88a17d935c7a1efdc30810feb8c878210f541f9a3cda10e6ae79909943d.scope/container/memory.events
Jan 21 23:41:34 compute-0 podman[217909]: 2026-01-21 23:41:34.82628217 +0000 UTC m=+0.970201339 container died 6595d88a17d935c7a1efdc30810feb8c878210f541f9a3cda10e6ae79909943d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:41:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f091d47009ee88b2bf24179a3872931b2e956ca99a7edd4d93fdc4c99815633-merged.mount: Deactivated successfully.
Jan 21 23:41:34 compute-0 podman[217909]: 2026-01-21 23:41:34.879811312 +0000 UTC m=+1.023730471 container remove 6595d88a17d935c7a1efdc30810feb8c878210f541f9a3cda10e6ae79909943d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_banzai, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 23:41:34 compute-0 systemd[1]: libpod-conmon-6595d88a17d935c7a1efdc30810feb8c878210f541f9a3cda10e6ae79909943d.scope: Deactivated successfully.
Jan 21 23:41:34 compute-0 sudo[218155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrrnvrtrugwjkybtjldnibpvgqibuept ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038894.5194552-3860-258320153860834/AnsiballZ_stat.py'
Jan 21 23:41:34 compute-0 sudo[218155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:34 compute-0 sudo[217631]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:34 compute-0 sudo[218158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:41:34 compute-0 sudo[218158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:34 compute-0 sudo[218158]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:35 compute-0 sudo[218183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:41:35 compute-0 sudo[218183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:35 compute-0 sudo[218183]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:35 compute-0 python3.9[218157]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:41:35 compute-0 sudo[218155]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:35 compute-0 sudo[218208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:41:35 compute-0 sudo[218208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:35 compute-0 sudo[218208]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:35 compute-0 sudo[218235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:41:35 compute-0 sudo[218235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:35 compute-0 ceph-mon[74318]: pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:35 compute-0 sudo[218345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ichetbodkiaeptutesxncujwuftiejeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038894.5194552-3860-258320153860834/AnsiballZ_file.py'
Jan 21 23:41:35 compute-0 sudo[218345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:35.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:35 compute-0 podman[218378]: 2026-01-21 23:41:35.553039007 +0000 UTC m=+0.046939283 container create 6bed02f294ac87909575faaf9847ca82d70574e6c8fc22c54f577985688e310f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:41:35 compute-0 python3.9[218350]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:35 compute-0 sudo[218345]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:35 compute-0 systemd[1]: Started libpod-conmon-6bed02f294ac87909575faaf9847ca82d70574e6c8fc22c54f577985688e310f.scope.
Jan 21 23:41:35 compute-0 podman[218378]: 2026-01-21 23:41:35.534134759 +0000 UTC m=+0.028035085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:41:35 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:41:35 compute-0 podman[218378]: 2026-01-21 23:41:35.658400048 +0000 UTC m=+0.152300344 container init 6bed02f294ac87909575faaf9847ca82d70574e6c8fc22c54f577985688e310f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_diffie, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:41:35 compute-0 podman[218378]: 2026-01-21 23:41:35.668158085 +0000 UTC m=+0.162058381 container start 6bed02f294ac87909575faaf9847ca82d70574e6c8fc22c54f577985688e310f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_diffie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 21 23:41:35 compute-0 podman[218378]: 2026-01-21 23:41:35.672409725 +0000 UTC m=+0.166310021 container attach 6bed02f294ac87909575faaf9847ca82d70574e6c8fc22c54f577985688e310f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_diffie, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 21 23:41:35 compute-0 agitated_diffie[218401]: 167 167
Jan 21 23:41:35 compute-0 systemd[1]: libpod-6bed02f294ac87909575faaf9847ca82d70574e6c8fc22c54f577985688e310f.scope: Deactivated successfully.
Jan 21 23:41:35 compute-0 podman[218378]: 2026-01-21 23:41:35.676037596 +0000 UTC m=+0.169937942 container died 6bed02f294ac87909575faaf9847ca82d70574e6c8fc22c54f577985688e310f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:41:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-791ac6f15f52e9e250e4cf5992e26db36f5c051f3c9bcd7cf90800726fc6157b-merged.mount: Deactivated successfully.
Jan 21 23:41:35 compute-0 podman[218378]: 2026-01-21 23:41:35.736049455 +0000 UTC m=+0.229949741 container remove 6bed02f294ac87909575faaf9847ca82d70574e6c8fc22c54f577985688e310f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:41:35 compute-0 systemd[1]: libpod-conmon-6bed02f294ac87909575faaf9847ca82d70574e6c8fc22c54f577985688e310f.scope: Deactivated successfully.
Jan 21 23:41:35 compute-0 podman[218403]: 2026-01-21 23:41:35.767623818 +0000 UTC m=+0.121454064 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 21 23:41:35 compute-0 podman[218488]: 2026-01-21 23:41:35.929135542 +0000 UTC m=+0.038115763 container create 3dea17af8243194c8acb581610046ff75686e45d8e282bde8dbfe12b9d91d64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_edison, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:41:35 compute-0 systemd[1]: Started libpod-conmon-3dea17af8243194c8acb581610046ff75686e45d8e282bde8dbfe12b9d91d64e.scope.
Jan 21 23:41:36 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:41:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:36.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6dbfa2634a8b5e7eda3d1e941f8c8d6b0a10591d4733bb2544523627e95310/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6dbfa2634a8b5e7eda3d1e941f8c8d6b0a10591d4733bb2544523627e95310/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6dbfa2634a8b5e7eda3d1e941f8c8d6b0a10591d4733bb2544523627e95310/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6dbfa2634a8b5e7eda3d1e941f8c8d6b0a10591d4733bb2544523627e95310/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:41:36 compute-0 podman[218488]: 2026-01-21 23:41:35.912875726 +0000 UTC m=+0.021855967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:41:36 compute-0 podman[218488]: 2026-01-21 23:41:36.029748299 +0000 UTC m=+0.138728570 container init 3dea17af8243194c8acb581610046ff75686e45d8e282bde8dbfe12b9d91d64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 23:41:36 compute-0 podman[218488]: 2026-01-21 23:41:36.037671851 +0000 UTC m=+0.146652072 container start 3dea17af8243194c8acb581610046ff75686e45d8e282bde8dbfe12b9d91d64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_edison, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 21 23:41:36 compute-0 podman[218488]: 2026-01-21 23:41:36.04061184 +0000 UTC m=+0.149592111 container attach 3dea17af8243194c8acb581610046ff75686e45d8e282bde8dbfe12b9d91d64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 21 23:41:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:36 compute-0 sudo[218611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yopjnarilswhikkhufkxuyhqomcihqmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038895.8775592-3896-99470850686042/AnsiballZ_stat.py'
Jan 21 23:41:36 compute-0 sudo[218611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:36 compute-0 python3.9[218613]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:41:36 compute-0 sudo[218611]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:36 compute-0 sudo[218694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbfecewlwbwpmhjydcpbdgsqtyqahsnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038895.8775592-3896-99470850686042/AnsiballZ_file.py'
Jan 21 23:41:36 compute-0 sudo[218694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:36 compute-0 nice_edison[218533]: {
Jan 21 23:41:36 compute-0 nice_edison[218533]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:41:36 compute-0 nice_edison[218533]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:41:36 compute-0 nice_edison[218533]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:41:36 compute-0 nice_edison[218533]:         "osd_id": 1,
Jan 21 23:41:36 compute-0 nice_edison[218533]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:41:36 compute-0 nice_edison[218533]:         "type": "bluestore"
Jan 21 23:41:36 compute-0 nice_edison[218533]:     }
Jan 21 23:41:36 compute-0 nice_edison[218533]: }
Jan 21 23:41:36 compute-0 systemd[1]: libpod-3dea17af8243194c8acb581610046ff75686e45d8e282bde8dbfe12b9d91d64e.scope: Deactivated successfully.
Jan 21 23:41:36 compute-0 conmon[218533]: conmon 3dea17af8243194c8acb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3dea17af8243194c8acb581610046ff75686e45d8e282bde8dbfe12b9d91d64e.scope/container/memory.events
Jan 21 23:41:36 compute-0 podman[218488]: 2026-01-21 23:41:36.939145513 +0000 UTC m=+1.048125764 container died 3dea17af8243194c8acb581610046ff75686e45d8e282bde8dbfe12b9d91d64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_edison, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 23:41:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d6dbfa2634a8b5e7eda3d1e941f8c8d6b0a10591d4733bb2544523627e95310-merged.mount: Deactivated successfully.
Jan 21 23:41:36 compute-0 podman[218488]: 2026-01-21 23:41:36.998818653 +0000 UTC m=+1.107798874 container remove 3dea17af8243194c8acb581610046ff75686e45d8e282bde8dbfe12b9d91d64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 21 23:41:37 compute-0 python3.9[218699]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:37 compute-0 systemd[1]: libpod-conmon-3dea17af8243194c8acb581610046ff75686e45d8e282bde8dbfe12b9d91d64e.scope: Deactivated successfully.
Jan 21 23:41:37 compute-0 sudo[218694]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:37 compute-0 sudo[218235]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:41:37 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:41:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:41:37 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
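
The two audit lines above show the cephadm mgr module persisting its host and device inventory into the monitor's config-key store under mgr/cephadm/* keys. A sketch of reading one of those blobs back, assuming a client with sufficient mon caps (e.g. the admin keyring):

    import subprocess

    # Key name taken from the handle_command audit line above.
    key = "mgr/cephadm/host.compute-0.devices.0"
    blob = subprocess.run(
        ["ceph", "config-key", "get", key],
        check=True, capture_output=True, text=True,
    ).stdout
    print(blob[:200])  # inventory JSON stored by cephadm
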
Jan 21 23:41:37 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 96e3a8e2-7d23-4d06-8a50-a937117131f1 does not exist
Jan 21 23:41:37 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 86d4545a-af4d-4df9-aff9-a090680ad9d7 does not exist
Jan 21 23:41:37 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 8ada6887-a762-414d-bd38-2368d9008936 does not exist
Jan 21 23:41:37 compute-0 sudo[218725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:41:37 compute-0 sudo[218725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:37 compute-0 sudo[218725]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:37 compute-0 sudo[218767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:41:37 compute-0 sudo[218767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:37 compute-0 ceph-mon[74318]: pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:37 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:41:37 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:41:37 compute-0 sudo[218767]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:37.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
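
These paired "starting new request" / "req done" / beast lines recur roughly every two seconds, alternating between 192.168.122.100 and 192.168.122.102, always as an anonymous HEAD / answered with 200 and an empty body; that pattern looks like load-balancer health probes against the RGW frontend. A probe of that form can be reproduced as below (the hostname and port 8080 are assumptions for illustration; the log does not record the listening port):

    import http.client

    # Hypothetical endpoint: host/port are placeholders, not from the log.
    conn = http.client.HTTPConnection("compute-0", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # a healthy RGW answers 200 with an empty body
    conn.close()
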
Jan 21 23:41:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:41:37 compute-0 sudo[218918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyewihnjlzaxovzussaierpgwaezfapg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038897.2262082-3932-127661017294671/AnsiballZ_stat.py'
Jan 21 23:41:37 compute-0 sudo[218918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:37 compute-0 python3.9[218920]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:41:37 compute-0 sudo[218918]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:38.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:38 compute-0 sudo[219043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzpvvrrgtkbhovhwwfzzyltrrpuudbnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038897.2262082-3932-127661017294671/AnsiballZ_copy.py'
Jan 21 23:41:38 compute-0 sudo[219043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:38 compute-0 python3.9[219045]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769038897.2262082-3932-127661017294671/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:38 compute-0 sudo[219043]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:39 compute-0 sudo[219195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttyijzvurbsqfaafzaxehtqadyfjeoqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038898.8086941-3977-273263825014604/AnsiballZ_file.py'
Jan 21 23:41:39 compute-0 sudo[219195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:41:39
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'images', '.mgr', 'vms', 'default.rgw.log']
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:41:39 compute-0 python3.9[219197]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:39 compute-0 ceph-mon[74318]: pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:39 compute-0 sudo[219195]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:41:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:41:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:39.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:39 compute-0 sudo[219348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhyaflvzqpxxkaldfpwjyxqkixufrkjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038899.6168554-4001-115432875849817/AnsiballZ_command.py'
Jan 21 23:41:39 compute-0 sudo[219348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:40.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:40 compute-0 python3.9[219350]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:41:40 compute-0 sudo[219348]: pam_unix(sudo:session): session closed for user root
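
The command logged at 23:41:40 concatenates the five edpm-*.nft fragments and pipes them through `nft -c -f -`: a check-only parse of the combined ruleset read from stdin, committing nothing to the kernel. The same pipeline, as a sketch:

    import subprocess

    FRAGMENTS = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    # Concatenate and let nft parse in check-only mode (-c): syntax and
    # cross-fragment consistency are verified, nothing is installed.
    combined = b"".join(open(p, "rb").read() for p in FRAGMENTS)
    subprocess.run(["nft", "-c", "-f", "-"], input=combined, check=True)
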
Jan 21 23:41:40 compute-0 sudo[219453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:41:40 compute-0 sudo[219453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:40 compute-0 sudo[219453]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:40 compute-0 sudo[219501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:41:40 compute-0 sudo[219501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:41:40 compute-0 sudo[219501]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:41 compute-0 sudo[219553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qewcqbmlpevicnvpkxtptmjncrxytgop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038900.5049286-4025-73482834119135/AnsiballZ_blockinfile.py'
Jan 21 23:41:41 compute-0 sudo[219553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:41 compute-0 python3.9[219555]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:41 compute-0 sudo[219553]: pam_unix(sudo:session): session closed for user root
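
The blockinfile invocation above maintains a marker-delimited block ("# BEGIN ANSIBLE MANAGED BLOCK" ... "# END ANSIBLE MANAGED BLOCK") of four include lines inside /etc/sysconfig/nftables.conf, validating the candidate file with `nft -c -f %s` before it replaces the live one. A minimal re-implementation of that idempotent pattern, using the markers and block content shown in the invocation (file-permission preservation is omitted for brevity):

    import re, shutil, subprocess, tempfile

    PATH = "/etc/sysconfig/nftables.conf"
    BEGIN = "# BEGIN ANSIBLE MANAGED BLOCK"
    END = "# END ANSIBLE MANAGED BLOCK"
    BLOCK = "\n".join([
        'include "/etc/nftables/iptables.nft"',
        'include "/etc/nftables/edpm-chains.nft"',
        'include "/etc/nftables/edpm-rules.nft"',
        'include "/etc/nftables/edpm-jumps.nft"',
    ])

    managed = f"{BEGIN}\n{BLOCK}\n{END}"
    text = open(PATH).read()
    # Replace an existing managed block in place, or append a new one.
    pat = re.compile(re.escape(BEGIN) + r".*?" + re.escape(END), re.S)
    text = (pat.sub(managed, text) if pat.search(text)
            else text.rstrip("\n") + "\n" + managed + "\n")

    # validate='nft -c -f %s': check the candidate before swapping it in.
    with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as tmp:
        tmp.write(text)
    subprocess.run(["nft", "-c", "-f", tmp.name], check=True)
    shutil.move(tmp.name, PATH)
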
Jan 21 23:41:41 compute-0 ceph-mon[74318]: pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:41.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:41 compute-0 sudo[219706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufpdgtbzpukzwtrrrmzsnkhmjoaeswkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038901.6498396-4052-114954936401538/AnsiballZ_command.py'
Jan 21 23:41:41 compute-0 sudo[219706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:42.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:42 compute-0 python3.9[219708]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:41:42 compute-0 sudo[219706]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:42 compute-0 ceph-mon[74318]: pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:41:42 compute-0 sudo[219859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtdgykqlkftslooglzfidxhmqdiiqjcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038902.4793682-4076-197753906698859/AnsiballZ_stat.py'
Jan 21 23:41:42 compute-0 sudo[219859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:43 compute-0 python3.9[219861]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:41:43 compute-0 sudo[219859]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:43.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:43 compute-0 sudo[220014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzznasxxmkaexbrwngnxjnxtpmtptmwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038903.5698316-4100-81330704028079/AnsiballZ_command.py'
Jan 21 23:41:43 compute-0 sudo[220014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:44.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:44 compute-0 python3.9[220016]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:41:44 compute-0 sudo[220014]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:44 compute-0 sudo[220169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhalfoiszgmdqjxcuxciyavsnqlpkmhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038904.4210052-4124-90068970338065/AnsiballZ_file.py'
Jan 21 23:41:44 compute-0 sudo[220169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:45 compute-0 python3.9[220171]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:45 compute-0 sudo[220169]: pam_unix(sudo:session): session closed for user root
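
Taken together, the tasks at 23:41:39 (touch edpm-rules.nft.changed), 23:41:43 (stat it), 23:41:44 (apply flushes + rules + update-jumps via `nft -f -`) and 23:41:45 (delete it) form a change-sentinel handler: the ruleset is reloaded only when an earlier task actually rewrote edpm-rules.nft, and the sentinel is consumed afterwards. The pattern, sketched with the paths from the log:

    import os, subprocess

    SENTINEL = "/etc/nftables/edpm-rules.nft.changed"
    APPLY = [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]

    # Only reload when the writer task left its sentinel behind.
    if os.path.exists(SENTINEL):
        combined = b"".join(open(p, "rb").read() for p in APPLY)
        subprocess.run(["nft", "-f", "-"], input=combined, check=True)
        os.remove(SENTINEL)  # consume the sentinel so the reload is one-shot
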
Jan 21 23:41:45 compute-0 ceph-mon[74318]: pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:45.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:45 compute-0 sudo[220322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxnytjfnorbfogmaptlyogjtjozhparg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038905.3067815-4148-219694897215406/AnsiballZ_stat.py'
Jan 21 23:41:45 compute-0 sudo[220322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:45 compute-0 python3.9[220324]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:41:45 compute-0 sudo[220322]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:46.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:46 compute-0 sudo[220445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqyscasckottevoylvuehnoggdhxntnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038905.3067815-4148-219694897215406/AnsiballZ_copy.py'
Jan 21 23:41:46 compute-0 sudo[220445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:46 compute-0 python3.9[220447]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769038905.3067815-4148-219694897215406/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:46 compute-0 sudo[220445]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:47 compute-0 ceph-mon[74318]: pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:47 compute-0 sudo[220597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybcdeohpsidmdzodgfrqyncupjfxhkov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038906.83575-4193-157834960143298/AnsiballZ_stat.py'
Jan 21 23:41:47 compute-0 sudo[220597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:47 compute-0 python3.9[220599]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:41:47 compute-0 sudo[220597]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:41:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:47.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:41:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:41:47 compute-0 sudo[220721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njncgxpfaoqnnmqysmcwpcbxzzqikmhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038906.83575-4193-157834960143298/AnsiballZ_copy.py'
Jan 21 23:41:47 compute-0 sudo[220721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:48.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:48 compute-0 python3.9[220723]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769038906.83575-4193-157834960143298/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:48 compute-0 sudo[220721]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:48 compute-0 sudo[220873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llzlviuytqkxlmusmizvnbsfunujzgaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038908.316109-4238-209553317802647/AnsiballZ_stat.py'
Jan 21 23:41:48 compute-0 sudo[220873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:41:48.735 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:41:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:41:48.738 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:41:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:41:48.738 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:41:48 compute-0 python3.9[220875]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:41:48 compute-0 sudo[220873]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:49 compute-0 ceph-mon[74318]: pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:49 compute-0 sudo[220996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nizothyzftjgrqlxraydombegnnfpklr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038908.316109-4238-209553317802647/AnsiballZ_copy.py'
Jan 21 23:41:49 compute-0 sudo[220996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:49 compute-0 python3.9[220998]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769038908.316109-4238-209553317802647/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:41:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:49.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:49 compute-0 sudo[220996]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:41:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:50.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:41:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:50 compute-0 sudo[221149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjdrsyhlkosantcqrpidognuazjsyxwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038909.8766127-4283-22298777793241/AnsiballZ_systemd.py'
Jan 21 23:41:50 compute-0 sudo[221149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:50 compute-0 python3.9[221151]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:41:50 compute-0 systemd[1]: Reloading.
Jan 21 23:41:50 compute-0 systemd-rc-local-generator[221174]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:41:50 compute-0 systemd-sysv-generator[221178]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:41:50 compute-0 ceph-mon[74318]: pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:50 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 21 23:41:50 compute-0 sudo[221149]: pam_unix(sudo:session): session closed for user root
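
The systemd module call above (daemon_reload=True, enabled=True, state=restarted) is roughly equivalent to a daemon-reload followed by enable and restart of the freshly copied edpm_libvirt.target; the "Reloading." and "Reached target" lines are its visible effect. As a sketch:

    import subprocess

    # Approximate shell equivalent of the ansible.builtin.systemd task:
    # pick up the new unit files, enable the target, then (re)start it.
    for cmd in (
        ["systemctl", "daemon-reload"],
        ["systemctl", "enable", "edpm_libvirt.target"],
        ["systemctl", "restart", "edpm_libvirt.target"],
    ):
        subprocess.run(cmd, check=True)
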
Jan 21 23:41:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:51.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:51 compute-0 sudo[221341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrgcsyworpahxuioaadwwvijjpqeiybu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038911.3607345-4307-221950387907302/AnsiballZ_systemd.py'
Jan 21 23:41:51 compute-0 sudo[221341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:41:51 compute-0 python3.9[221343]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 21 23:41:51 compute-0 systemd[1]: Reloading.
Jan 21 23:41:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:52.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:52 compute-0 systemd-sysv-generator[221374]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:41:52 compute-0 systemd-rc-local-generator[221367]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:41:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:52 compute-0 systemd[1]: Reloading.
Jan 21 23:41:52 compute-0 podman[221379]: 2026-01-21 23:41:52.377549841 +0000 UTC m=+0.074792811 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 21 23:41:52 compute-0 systemd-sysv-generator[221429]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:41:52 compute-0 systemd-rc-local-generator[221426]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:41:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:41:52 compute-0 sudo[221341]: pam_unix(sudo:session): session closed for user root
Jan 21 23:41:53 compute-0 sshd-session[159890]: Connection closed by 192.168.122.30 port 39256
Jan 21 23:41:53 compute-0 sshd-session[159868]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:41:53 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Jan 21 23:41:53 compute-0 systemd[1]: session-49.scope: Consumed 3min 52.703s CPU time.
Jan 21 23:41:53 compute-0 systemd-logind[786]: Session 49 logged out. Waiting for processes to exit.
Jan 21 23:41:53 compute-0 systemd-logind[786]: Removed session 49.
Jan 21 23:41:53 compute-0 ceph-mon[74318]: pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:53.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.004000123s ======
Jan 21 23:41:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:54.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000123s
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
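
Each pg_autoscaler pair above reports a pool's share of the 22535995392-byte (21 GiB) capacity and a raw PG target that is then quantized (to 1, 16, or 32 here). The logged numbers are consistent with pg_target = usage_ratio x bias x 300, where the factor 300 would correspond to three OSDs at the default mon_target_pg_per_osd of 100; that OSD count is an inference from the figures, not stated in the log. A worked check against the '.mgr' and 'cephfs.cephfs.meta' lines:

    # pg_target == usage_ratio * bias * (num_osds * target_pg_per_osd)
    budget = 3 * 100  # inferred: 3 OSDs x mon_target_pg_per_osd=100

    print(2.0538165363856318e-05 * 1.0 * budget)  # -> 0.006161449... ('.mgr')
    print(1.4540294062907128e-06 * 4.0 * budget)  # -> 0.001744835... (cephfs.cephfs.meta)
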
Jan 21 23:41:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:55 compute-0 ceph-mon[74318]: pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:41:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:55.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:41:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:56.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:57 compute-0 ceph-mon[74318]: pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:41:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:57.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:41:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:41:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:41:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:41:58.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:41:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:58 compute-0 sshd-session[221462]: Accepted publickey for zuul from 192.168.122.30 port 39350 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:41:58 compute-0 systemd-logind[786]: New session 50 of user zuul.
Jan 21 23:41:58 compute-0 systemd[1]: Started Session 50 of User zuul.
Jan 21 23:41:58 compute-0 sshd-session[221462]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:41:59 compute-0 ceph-mon[74318]: pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:41:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:41:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:41:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:41:59.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:41:59 compute-0 python3.9[221615]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:42:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:42:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:00.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:42:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:01 compute-0 sudo[221771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:42:01 compute-0 sudo[221771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:01 compute-0 sudo[221771]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:01 compute-0 python3.9[221770]: ansible-ansible.builtin.service_facts Invoked
Jan 21 23:42:01 compute-0 sudo[221796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:42:01 compute-0 sudo[221796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:01 compute-0 sudo[221796]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:01 compute-0 network[221837]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 23:42:01 compute-0 network[221838]: 'network-scripts' will be removed from distribution in near future.
Jan 21 23:42:01 compute-0 network[221839]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 23:42:01 compute-0 ceph-mon[74318]: pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:01.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:02.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:42:03 compute-0 ceph-mon[74318]: pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:42:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:03.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:42:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:04.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:05 compute-0 ceph-mon[74318]: pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:05.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:05 compute-0 podman[221952]: 2026-01-21 23:42:05.955410228 +0000 UTC m=+0.114072029 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 21 23:42:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:06.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:06 compute-0 sudo[222138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdazqgskameikvgisplhhlugcktooblz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038926.558051-101-172270963777546/AnsiballZ_setup.py'
Jan 21 23:42:06 compute-0 sudo[222138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:07 compute-0 python3.9[222140]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 23:42:07 compute-0 ceph-mon[74318]: pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:42:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:07.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:07 compute-0 sudo[222138]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:08.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:08 compute-0 sudo[222223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gthbtsmipexbanhkymfllvfktwvujjat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038926.558051-101-172270963777546/AnsiballZ_dnf.py'
Jan 21 23:42:08 compute-0 sudo[222223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:08 compute-0 python3.9[222225]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
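
The dnf module call above installs iscsi-initiator-utils with state=present, i.e. idempotently: the five-second gap before the session closes at 23:42:13 is the transaction running. Its approximate shell equivalent, sketched:

    import subprocess

    # state=present semantics: dnf exits 0 and changes nothing if the
    # package is already installed at a satisfactory version.
    subprocess.run(["dnf", "install", "-y", "iscsi-initiator-utils"], check=True)
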
Jan 21 23:42:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:42:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:42:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:42:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:42:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:42:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:42:09 compute-0 ceph-mon[74318]: pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:09.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:42:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:10.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:42:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:11 compute-0 ceph-mon[74318]: pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:42:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:11.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:42:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:12.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:12 compute-0 ceph-mon[74318]: pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:42:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:13.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:13 compute-0 sudo[222223]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:14.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:14 compute-0 sudo[222379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjvafyfrsaxrcpseillsgzsphnnbzwmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038933.9039242-137-98061463079053/AnsiballZ_stat.py'
Jan 21 23:42:14 compute-0 sudo[222379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:14 compute-0 python3.9[222381]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:42:14 compute-0 sudo[222379]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:15 compute-0 ceph-mon[74318]: pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:15.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:15 compute-0 sudo[222532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teemacodyorlvzwvywkjfzsviffyxfmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038935.0417032-167-280170762132948/AnsiballZ_command.py'
Jan 21 23:42:15 compute-0 sudo[222532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:15 compute-0 python3.9[222534]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
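restorecon here runs in report-only mode: -n makes no changes, -v prints each file it would touch, -r recurses, so the task only checks whether anything under /etc/iscsi or /var/lib/iscsi has drifted from the SELinux policy defaults. The same check by hand:

    /usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi   # with -n it only reports what it would relabel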
Jan 21 23:42:15 compute-0 sudo[222532]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:16.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:16 compute-0 sudo[222685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhjxmcalprjwcieyckpjuexpfecqrnhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038936.2571533-197-153665460776347/AnsiballZ_stat.py'
Jan 21 23:42:16 compute-0 sudo[222685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:16 compute-0 python3.9[222687]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:42:16 compute-0 sudo[222685]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:17 compute-0 ceph-mon[74318]: pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:17 compute-0 sudo[222838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxiuyoumejnkqcmfnnikpzitgayggecq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038937.069543-221-197588071985496/AnsiballZ_command.py'
Jan 21 23:42:17 compute-0 sudo[222838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:42:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:17.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:17 compute-0 python3.9[222840]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
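iscsi-iname prints a freshly generated initiator IQN. On Red Hat builds the output usually follows the pattern sketched below; the suffix is random, so the value shown is illustrative only:

    /usr/sbin/iscsi-iname
    # iqn.1994-05.com.redhat:3f5c8d9a21e   <- example shape of the output, not a value from this log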
Jan 21 23:42:17 compute-0 sudo[222838]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:42:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:18.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:42:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:18 compute-0 sudo[222991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoajlkyhattqumvkbaddgnzitprgfjse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038937.8981512-245-152477085404208/AnsiballZ_stat.py'
Jan 21 23:42:18 compute-0 sudo[222991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:18 compute-0 python3.9[222993]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:42:18 compute-0 sudo[222991]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:19 compute-0 sudo[223114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxfeuubctvfngguoiwumoculvlqjphyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038937.8981512-245-152477085404208/AnsiballZ_copy.py'
Jan 21 23:42:19 compute-0 sudo[223114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:19 compute-0 ceph-mon[74318]: pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:19 compute-0 python3.9[223116]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769038937.8981512-245-152477085404208/.source.iscsi _original_basename=.36gu1244 follow=False checksum=8981e6503ea23aa2b1547fd792311f6280287903 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
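The copy task writes the generated IQN into /etc/iscsi/initiatorname.iscsi with mode 0644. The file holds a single assignment; the IQN below is a placeholder, since the real value is not logged:

    # /etc/iscsi/initiatorname.iscsi
    InitiatorName=iqn.1994-05.com.redhat:3f5c8d9a21e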
Jan 21 23:42:19 compute-0 sudo[223114]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:19.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:20.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:20 compute-0 sudo[223267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hssleitmoiscuverbyomdvdokmsowfbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038939.5216265-290-118129347005153/AnsiballZ_file.py'
Jan 21 23:42:20 compute-0 sudo[223267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:20 compute-0 python3.9[223269]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:42:20 compute-0 sudo[223267]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:21 compute-0 sudo[223419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nloymcsxyxwgtqfcdagwldbinkadwidn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038940.5319927-314-264828608975608/AnsiballZ_lineinfile.py'
Jan 21 23:42:21 compute-0 sudo[223419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:21 compute-0 ceph-mon[74318]: pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:21 compute-0 sudo[223422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:42:21 compute-0 sudo[223422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:21 compute-0 sudo[223422]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:21 compute-0 python3.9[223421]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:42:21 compute-0 sudo[223419]: pam_unix(sudo:session): session closed for user root
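The lineinfile task above pins the CHAP digest preference order in /etc/iscsi/iscsid.conf, inserting the line after the commented-out default when it is not already present. The resulting configuration line, exactly as given in the module arguments:

    node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5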
Jan 21 23:42:21 compute-0 sudo[223447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:42:21 compute-0 sudo[223447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:21 compute-0 sudo[223447]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:42:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:21.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:42:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:22.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:22 compute-0 sudo[223622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzyoskhijidrrhsvgibcjnqptjkdqibp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038941.6566348-341-275408349923016/AnsiballZ_systemd_service.py'
Jan 21 23:42:22 compute-0 sudo[223622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:42:22 compute-0 python3.9[223624]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:42:22 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 21 23:42:22 compute-0 podman[223626]: 2026-01-21 23:42:22.747713611 +0000 UTC m=+0.066796598 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 21 23:42:22 compute-0 sudo[223622]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:23 compute-0 ceph-mon[74318]: pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:23 compute-0 sudo[223800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lximehyvbzaxxivoxsswzaleyrrbhjqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038943.0374181-365-59683223288572/AnsiballZ_systemd_service.py'
Jan 21 23:42:23 compute-0 sudo[223800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:23.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:23 compute-0 python3.9[223802]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:42:23 compute-0 systemd[1]: Reloading.
Jan 21 23:42:23 compute-0 systemd-rc-local-generator[223832]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:42:23 compute-0 systemd-sysv-generator[223835]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:42:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:24.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:24 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 21 23:42:24 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 21 23:42:24 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 21 23:42:24 compute-0 systemd[1]: Started Open-iSCSI.
Jan 21 23:42:24 compute-0 systemd[1]: Starting Logout of all iSCSI sessions on shutdown...
Jan 21 23:42:24 compute-0 systemd[1]: Finished Logout of all iSCSI sessions on shutdown.
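This block enables iscsid via socket activation and then starts the daemon itself; the one-time configuration unit is skipped because ConditionPathExists=!/etc/iscsi/initiatorname.iscsi only lets it run when the initiator name file is absent, and that file was written above. The manual equivalent of the two systemd_service tasks:

    systemctl enable --now iscsid.socket    # systemd listens and spawns iscsid on first use
    systemctl enable --now iscsid.service   # also start the daemon right away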
Jan 21 23:42:24 compute-0 sudo[223800]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:25 compute-0 ceph-mon[74318]: pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:25 compute-0 python3.9[224001]: ansible-ansible.builtin.service_facts Invoked
Jan 21 23:42:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:42:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:25.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:42:25 compute-0 network[224019]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 23:42:25 compute-0 network[224020]: 'network-scripts' will be removed from distribution in near future.
Jan 21 23:42:25 compute-0 network[224021]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 23:42:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:42:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:26.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:42:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:27 compute-0 ceph-mon[74318]: pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:42:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:27.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:42:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:28.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:42:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:28 compute-0 ceph-mon[74318]: pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:42:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:29.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:42:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:30.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:31 compute-0 ceph-mon[74318]: pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:31 compute-0 sudo[224294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oswjnyifzgzihdnxdgumcgcstilngbxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038951.0383377-434-107619157765824/AnsiballZ_dnf.py'
Jan 21 23:42:31 compute-0 sudo[224294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:42:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:31.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:42:31 compute-0 python3.9[224296]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
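As with iscsi-initiator-utils earlier, this is the dnf module with default options; the man-db-cache-update churn that follows is the usual post-transaction trigger. A manual equivalent plus a verification step (the check is a suggestion, not something run in this log):

    dnf install -y device-mapper-multipath
    rpm -q device-mapper-multipath   # confirm the package is installed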
Jan 21 23:42:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:32.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:42:33 compute-0 ceph-mon[74318]: pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:33.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:34 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 23:42:34 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 21 23:42:34 compute-0 systemd[1]: Reloading.
Jan 21 23:42:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:34.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:34 compute-0 systemd-rc-local-generator[224344]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:42:34 compute-0 systemd-sysv-generator[224348]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:42:34 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 23:42:34 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 23:42:34 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 23:42:34 compute-0 systemd[1]: run-r59132a22a1b04589ba7c3f27c689c908.service: Deactivated successfully.
Jan 21 23:42:34 compute-0 sudo[224294]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:35 compute-0 ceph-mon[74318]: pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:35.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:35 compute-0 sudo[224612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goqqwtisjmgqduxjwsexwhsobdrvxtxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038955.4855607-461-174953096935651/AnsiballZ_file.py'
Jan 21 23:42:35 compute-0 sudo[224612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:35 compute-0 python3.9[224614]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 21 23:42:35 compute-0 sudo[224612]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:36.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:36 compute-0 sudo[224775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-troptpgakcgufwgnllykhkxcirvltfxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038956.4494336-485-70450585459934/AnsiballZ_modprobe.py'
Jan 21 23:42:36 compute-0 sudo[224775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:37 compute-0 podman[224738]: 2026-01-21 23:42:37.00885019 +0000 UTC m=+0.137502925 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:42:37 compute-0 python3.9[224779]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 21 23:42:37 compute-0 sudo[224775]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:37 compute-0 ceph-mon[74318]: pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:42:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:37.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:37 compute-0 sudo[224868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:42:37 compute-0 sudo[224868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:37 compute-0 sudo[224868]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:37 compute-0 sudo[224899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:42:37 compute-0 sudo[224899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:37 compute-0 sudo[224899]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:37 compute-0 sudo[224947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:42:37 compute-0 sudo[224947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:37 compute-0 sudo[224947]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:37 compute-0 sudo[224993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:42:37 compute-0 sudo[224993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:37 compute-0 sudo[225047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tylsncshekvfeuamzvproutatdetafak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038957.4596903-509-214600392706681/AnsiballZ_stat.py'
Jan 21 23:42:37 compute-0 sudo[225047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:38 compute-0 python3.9[225049]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:42:38 compute-0 sudo[225047]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:38.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:38 compute-0 sudo[224993]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:42:38 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:42:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:42:38 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:42:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:42:38 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:42:38 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 8b07c80b-0dcc-4e01-8796-e06816504bf4 does not exist
Jan 21 23:42:38 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev c3a8ae53-a8e2-480f-98b6-cba4a02c1a76 does not exist
Jan 21 23:42:38 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev f81157c1-1d2e-48ea-b276-af88b78eeec3 does not exist
Jan 21 23:42:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:42:38 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:42:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:42:38 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:42:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:42:38 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:42:38 compute-0 sudo[225208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhxnjsxiqfzryobvmbsonzxeqxbtsqvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038957.4596903-509-214600392706681/AnsiballZ_copy.py'
Jan 21 23:42:38 compute-0 sudo[225208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:38 compute-0 sudo[225202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:42:38 compute-0 sudo[225202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:38 compute-0 sudo[225202]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:38 compute-0 sudo[225232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:42:38 compute-0 sudo[225232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:38 compute-0 sudo[225232]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:38 compute-0 sudo[225257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:42:38 compute-0 sudo[225257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:38 compute-0 sudo[225257]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:38 compute-0 python3.9[225226]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769038957.4596903-509-214600392706681/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:42:38 compute-0 sudo[225282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:42:38 compute-0 sudo[225282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:38 compute-0 sudo[225208]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:39 compute-0 podman[225371]: 2026-01-21 23:42:39.150908164 +0000 UTC m=+0.074258931 container create 1e25c6fa393b8e46fbd71b53af103f1bc9df86805d330179ef6dd2fc8d29071e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 21 23:42:39 compute-0 systemd[1]: Started libpod-conmon-1e25c6fa393b8e46fbd71b53af103f1bc9df86805d330179ef6dd2fc8d29071e.scope.
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:42:39
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'backups']
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:42:39 compute-0 podman[225371]: 2026-01-21 23:42:39.117681992 +0000 UTC m=+0.041032799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:42:39 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:42:39 compute-0 podman[225371]: 2026-01-21 23:42:39.253594918 +0000 UTC m=+0.176945735 container init 1e25c6fa393b8e46fbd71b53af103f1bc9df86805d330179ef6dd2fc8d29071e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:42:39 compute-0 podman[225371]: 2026-01-21 23:42:39.26550395 +0000 UTC m=+0.188854687 container start 1e25c6fa393b8e46fbd71b53af103f1bc9df86805d330179ef6dd2fc8d29071e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:42:39 compute-0 podman[225371]: 2026-01-21 23:42:39.269873453 +0000 UTC m=+0.193224290 container attach 1e25c6fa393b8e46fbd71b53af103f1bc9df86805d330179ef6dd2fc8d29071e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_gates, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 23:42:39 compute-0 dazzling_gates[225418]: 167 167
Jan 21 23:42:39 compute-0 systemd[1]: libpod-1e25c6fa393b8e46fbd71b53af103f1bc9df86805d330179ef6dd2fc8d29071e.scope: Deactivated successfully.
Jan 21 23:42:39 compute-0 podman[225371]: 2026-01-21 23:42:39.274423801 +0000 UTC m=+0.197774578 container died 1e25c6fa393b8e46fbd71b53af103f1bc9df86805d330179ef6dd2fc8d29071e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_gates, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:42:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a0dea7729876750c1bb37178a8931e0862e0043879510e7d52dff42064ac5ce-merged.mount: Deactivated successfully.
Jan 21 23:42:39 compute-0 podman[225371]: 2026-01-21 23:42:39.322733261 +0000 UTC m=+0.246083998 container remove 1e25c6fa393b8e46fbd71b53af103f1bc9df86805d330179ef6dd2fc8d29071e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_gates, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:42:39 compute-0 systemd[1]: libpod-conmon-1e25c6fa393b8e46fbd71b53af103f1bc9df86805d330179ef6dd2fc8d29071e.scope: Deactivated successfully.
Jan 21 23:42:39 compute-0 ceph-mon[74318]: pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:42:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:42:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:42:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:42:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:42:39 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:42:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:42:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:42:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:39.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:42:39 compute-0 podman[225509]: 2026-01-21 23:42:39.554664078 +0000 UTC m=+0.075643743 container create a2c13963ddab17ac78d5267d165a9792bfb85282aeb258b7fd9f6e8cdc301071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 21 23:42:39 compute-0 sudo[225550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcdghuslqxxtgesrhrfltfacwzyjqztl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038959.1721237-557-277542885355056/AnsiballZ_lineinfile.py'
Jan 21 23:42:39 compute-0 sudo[225550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:39 compute-0 systemd[1]: Started libpod-conmon-a2c13963ddab17ac78d5267d165a9792bfb85282aeb258b7fd9f6e8cdc301071.scope.
Jan 21 23:42:39 compute-0 podman[225509]: 2026-01-21 23:42:39.52648275 +0000 UTC m=+0.047462465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:42:39 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c68b19b3853737388182e5996123301c6aa06f0bc2722aa0d6800d63797cff6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c68b19b3853737388182e5996123301c6aa06f0bc2722aa0d6800d63797cff6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c68b19b3853737388182e5996123301c6aa06f0bc2722aa0d6800d63797cff6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c68b19b3853737388182e5996123301c6aa06f0bc2722aa0d6800d63797cff6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c68b19b3853737388182e5996123301c6aa06f0bc2722aa0d6800d63797cff6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:42:39 compute-0 podman[225509]: 2026-01-21 23:42:39.658413304 +0000 UTC m=+0.179393049 container init a2c13963ddab17ac78d5267d165a9792bfb85282aeb258b7fd9f6e8cdc301071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banzai, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 21 23:42:39 compute-0 podman[225509]: 2026-01-21 23:42:39.6713906 +0000 UTC m=+0.192370295 container start a2c13963ddab17ac78d5267d165a9792bfb85282aeb258b7fd9f6e8cdc301071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banzai, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:42:39 compute-0 podman[225509]: 2026-01-21 23:42:39.675579597 +0000 UTC m=+0.196559292 container attach a2c13963ddab17ac78d5267d165a9792bfb85282aeb258b7fd9f6e8cdc301071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 21 23:42:39 compute-0 python3.9[225555]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:42:39 compute-0 sudo[225550]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:40.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:40 compute-0 loving_banzai[225556]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:42:40 compute-0 loving_banzai[225556]: --> relative data size: 1.0
Jan 21 23:42:40 compute-0 loving_banzai[225556]: --> All data devices are unavailable
Jan 21 23:42:40 compute-0 systemd[1]: libpod-a2c13963ddab17ac78d5267d165a9792bfb85282aeb258b7fd9f6e8cdc301071.scope: Deactivated successfully.
Jan 21 23:42:40 compute-0 podman[225509]: 2026-01-21 23:42:40.58313796 +0000 UTC m=+1.104117645 container died a2c13963ddab17ac78d5267d165a9792bfb85282aeb258b7fd9f6e8cdc301071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banzai, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 21 23:42:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c68b19b3853737388182e5996123301c6aa06f0bc2722aa0d6800d63797cff6-merged.mount: Deactivated successfully.
Jan 21 23:42:40 compute-0 podman[225509]: 2026-01-21 23:42:40.649602031 +0000 UTC m=+1.170581696 container remove a2c13963ddab17ac78d5267d165a9792bfb85282aeb258b7fd9f6e8cdc301071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banzai, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 21 23:42:40 compute-0 systemd[1]: libpod-conmon-a2c13963ddab17ac78d5267d165a9792bfb85282aeb258b7fd9f6e8cdc301071.scope: Deactivated successfully.
Jan 21 23:42:40 compute-0 sudo[225282]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:40 compute-0 sudo[225704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:42:40 compute-0 sudo[225704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:40 compute-0 sudo[225704]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:40 compute-0 sudo[225766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaoebewjydbgacdsacpqudmqbyzfvjuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038960.058808-581-153435509821516/AnsiballZ_systemd.py'
Jan 21 23:42:40 compute-0 sudo[225766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:40 compute-0 sudo[225757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:42:40 compute-0 sudo[225757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:40 compute-0 sudo[225757]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:40 compute-0 sudo[225787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:42:40 compute-0 sudo[225787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:40 compute-0 sudo[225787]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:40 compute-0 sudo[225812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:42:40 compute-0 sudo[225812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:41 compute-0 python3.9[225783]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 23:42:41 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 21 23:42:41 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 21 23:42:41 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 21 23:42:41 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 21 23:42:41 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 21 23:42:41 compute-0 sudo[225766]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:41 compute-0 podman[225903]: 2026-01-21 23:42:41.376038964 +0000 UTC m=+0.066157304 container create 4959a511c207334d72906c7df24a0a262a6a0d9af22037a26f1694317da86e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_black, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 21 23:42:41 compute-0 ceph-mon[74318]: pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:41 compute-0 systemd[1]: Started libpod-conmon-4959a511c207334d72906c7df24a0a262a6a0d9af22037a26f1694317da86e96.scope.
Jan 21 23:42:41 compute-0 sudo[225916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:42:41 compute-0 sudo[225916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:41 compute-0 sudo[225916]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:41 compute-0 podman[225903]: 2026-01-21 23:42:41.343786652 +0000 UTC m=+0.033905072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:42:41 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:42:41 compute-0 podman[225903]: 2026-01-21 23:42:41.457830603 +0000 UTC m=+0.147948923 container init 4959a511c207334d72906c7df24a0a262a6a0d9af22037a26f1694317da86e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_black, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:42:41 compute-0 podman[225903]: 2026-01-21 23:42:41.466175097 +0000 UTC m=+0.156293397 container start 4959a511c207334d72906c7df24a0a262a6a0d9af22037a26f1694317da86e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_black, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 21 23:42:41 compute-0 podman[225903]: 2026-01-21 23:42:41.469146767 +0000 UTC m=+0.159265087 container attach 4959a511c207334d72906c7df24a0a262a6a0d9af22037a26f1694317da86e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 23:42:41 compute-0 tender_black[225941]: 167 167
Jan 21 23:42:41 compute-0 systemd[1]: libpod-4959a511c207334d72906c7df24a0a262a6a0d9af22037a26f1694317da86e96.scope: Deactivated successfully.
Jan 21 23:42:41 compute-0 podman[225903]: 2026-01-21 23:42:41.473023684 +0000 UTC m=+0.163141984 container died 4959a511c207334d72906c7df24a0a262a6a0d9af22037a26f1694317da86e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_black, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Jan 21 23:42:41 compute-0 sudo[225947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:42:41 compute-0 sudo[225947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:41 compute-0 sudo[225947]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1a52523ca933768ba1d12f0601691eb27d5deabb5e5c9f041cecb193ea42e37-merged.mount: Deactivated successfully.
Jan 21 23:42:41 compute-0 podman[225903]: 2026-01-21 23:42:41.512078583 +0000 UTC m=+0.202196883 container remove 4959a511c207334d72906c7df24a0a262a6a0d9af22037a26f1694317da86e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_black, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 21 23:42:41 compute-0 systemd[1]: libpod-conmon-4959a511c207334d72906c7df24a0a262a6a0d9af22037a26f1694317da86e96.scope: Deactivated successfully.
Jan 21 23:42:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:41.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:41 compute-0 podman[226068]: 2026-01-21 23:42:41.677116824 +0000 UTC m=+0.049155706 container create 196cb83da85aa6d3051a6e82f4ead1da4d7dfe3d6f59370c37423408a33e188a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 21 23:42:41 compute-0 systemd[1]: Started libpod-conmon-196cb83da85aa6d3051a6e82f4ead1da4d7dfe3d6f59370c37423408a33e188a.scope.
Jan 21 23:42:41 compute-0 podman[226068]: 2026-01-21 23:42:41.65759535 +0000 UTC m=+0.029634252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:42:41 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:42:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce6b7070a2e240a7a5581c60b4bb640f47d97472cf60ecfc8def621631bf6669/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:42:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce6b7070a2e240a7a5581c60b4bb640f47d97472cf60ecfc8def621631bf6669/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:42:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce6b7070a2e240a7a5581c60b4bb640f47d97472cf60ecfc8def621631bf6669/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:42:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce6b7070a2e240a7a5581c60b4bb640f47d97472cf60ecfc8def621631bf6669/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:42:41 compute-0 podman[226068]: 2026-01-21 23:42:41.807106929 +0000 UTC m=+0.179145841 container init 196cb83da85aa6d3051a6e82f4ead1da4d7dfe3d6f59370c37423408a33e188a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:42:41 compute-0 podman[226068]: 2026-01-21 23:42:41.81502284 +0000 UTC m=+0.187061752 container start 196cb83da85aa6d3051a6e82f4ead1da4d7dfe3d6f59370c37423408a33e188a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 21 23:42:41 compute-0 podman[226068]: 2026-01-21 23:42:41.819658871 +0000 UTC m=+0.191697833 container attach 196cb83da85aa6d3051a6e82f4ead1da4d7dfe3d6f59370c37423408a33e188a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:42:41 compute-0 sudo[226137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xksbrbosxkxopnzqbylpqwnnkwiahgxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038961.4697752-605-143985182956625/AnsiballZ_command.py'
Jan 21 23:42:41 compute-0 sudo[226137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:42 compute-0 python3.9[226141]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:42:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:42 compute-0 sudo[226137]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:42.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:42:42 compute-0 cool_rubin[226108]: {
Jan 21 23:42:42 compute-0 cool_rubin[226108]:     "1": [
Jan 21 23:42:42 compute-0 cool_rubin[226108]:         {
Jan 21 23:42:42 compute-0 cool_rubin[226108]:             "devices": [
Jan 21 23:42:42 compute-0 cool_rubin[226108]:                 "/dev/loop3"
Jan 21 23:42:42 compute-0 cool_rubin[226108]:             ],
Jan 21 23:42:42 compute-0 cool_rubin[226108]:             "lv_name": "ceph_lv0",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:             "lv_size": "7511998464",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:             "name": "ceph_lv0",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:             "tags": {
Jan 21 23:42:42 compute-0 cool_rubin[226108]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:                 "ceph.cluster_name": "ceph",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:                 "ceph.crush_device_class": "",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:                 "ceph.encrypted": "0",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:                 "ceph.osd_id": "1",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:                 "ceph.type": "block",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:                 "ceph.vdo": "0"
Jan 21 23:42:42 compute-0 cool_rubin[226108]:             },
Jan 21 23:42:42 compute-0 cool_rubin[226108]:             "type": "block",
Jan 21 23:42:42 compute-0 cool_rubin[226108]:             "vg_name": "ceph_vg0"
Jan 21 23:42:42 compute-0 cool_rubin[226108]:         }
Jan 21 23:42:42 compute-0 cool_rubin[226108]:     ]
Jan 21 23:42:42 compute-0 cool_rubin[226108]: }
Jan 21 23:42:42 compute-0 systemd[1]: libpod-196cb83da85aa6d3051a6e82f4ead1da4d7dfe3d6f59370c37423408a33e188a.scope: Deactivated successfully.
Jan 21 23:42:42 compute-0 podman[226068]: 2026-01-21 23:42:42.606536332 +0000 UTC m=+0.978575244 container died 196cb83da85aa6d3051a6e82f4ead1da4d7dfe3d6f59370c37423408a33e188a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rubin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 23:42:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce6b7070a2e240a7a5581c60b4bb640f47d97472cf60ecfc8def621631bf6669-merged.mount: Deactivated successfully.
Jan 21 23:42:42 compute-0 podman[226068]: 2026-01-21 23:42:42.674418438 +0000 UTC m=+1.046457320 container remove 196cb83da85aa6d3051a6e82f4ead1da4d7dfe3d6f59370c37423408a33e188a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rubin, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 21 23:42:42 compute-0 systemd[1]: libpod-conmon-196cb83da85aa6d3051a6e82f4ead1da4d7dfe3d6f59370c37423408a33e188a.scope: Deactivated successfully.
Jan 21 23:42:42 compute-0 sudo[225812]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:42 compute-0 sudo[226260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:42:42 compute-0 sudo[226260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:42 compute-0 sudo[226260]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:42 compute-0 sudo[226355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjbdzxwbbyorbjukcbntikiogfxqempu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038962.5055122-635-54485912036198/AnsiballZ_stat.py'
Jan 21 23:42:42 compute-0 sudo[226355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:42 compute-0 sudo[226319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:42:42 compute-0 sudo[226319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:42 compute-0 sudo[226319]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:42 compute-0 sudo[226363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:42:42 compute-0 sudo[226363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:42 compute-0 sudo[226363]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:43 compute-0 sudo[226388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:42:43 compute-0 sudo[226388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:43 compute-0 python3.9[226361]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:42:43 compute-0 sudo[226355]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:43 compute-0 ceph-mon[74318]: pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:43 compute-0 podman[226479]: 2026-01-21 23:42:43.416523357 +0000 UTC m=+0.055242492 container create 5b26a38f01dd63e77b0239fdb2f3e8a57f26045b8391379815c4ec4dbe9acc50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 21 23:42:43 compute-0 systemd[1]: Started libpod-conmon-5b26a38f01dd63e77b0239fdb2f3e8a57f26045b8391379815c4ec4dbe9acc50.scope.
Jan 21 23:42:43 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:42:43 compute-0 podman[226479]: 2026-01-21 23:42:43.398876689 +0000 UTC m=+0.037595794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:42:43 compute-0 podman[226479]: 2026-01-21 23:42:43.507993119 +0000 UTC m=+0.146712294 container init 5b26a38f01dd63e77b0239fdb2f3e8a57f26045b8391379815c4ec4dbe9acc50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wing, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 23:42:43 compute-0 podman[226479]: 2026-01-21 23:42:43.521582103 +0000 UTC m=+0.160301248 container start 5b26a38f01dd63e77b0239fdb2f3e8a57f26045b8391379815c4ec4dbe9acc50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:42:43 compute-0 charming_wing[226540]: 167 167
Jan 21 23:42:43 compute-0 podman[226479]: 2026-01-21 23:42:43.525787411 +0000 UTC m=+0.164506606 container attach 5b26a38f01dd63e77b0239fdb2f3e8a57f26045b8391379815c4ec4dbe9acc50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wing, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:42:43 compute-0 systemd[1]: libpod-5b26a38f01dd63e77b0239fdb2f3e8a57f26045b8391379815c4ec4dbe9acc50.scope: Deactivated successfully.
Jan 21 23:42:43 compute-0 podman[226479]: 2026-01-21 23:42:43.530485944 +0000 UTC m=+0.169205079 container died 5b26a38f01dd63e77b0239fdb2f3e8a57f26045b8391379815c4ec4dbe9acc50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wing, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:42:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:43.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-be9945d122efdf58960005fd4e6c4e05adf18fc1ca4307b3aa15c4356053ab41-merged.mount: Deactivated successfully.
Jan 21 23:42:43 compute-0 podman[226479]: 2026-01-21 23:42:43.577617348 +0000 UTC m=+0.216336473 container remove 5b26a38f01dd63e77b0239fdb2f3e8a57f26045b8391379815c4ec4dbe9acc50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:42:43 compute-0 systemd[1]: libpod-conmon-5b26a38f01dd63e77b0239fdb2f3e8a57f26045b8391379815c4ec4dbe9acc50.scope: Deactivated successfully.
Jan 21 23:42:43 compute-0 sudo[226641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ompkcgsnmbkwzdktpzoafecdahpaycdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038963.3995066-662-50196681898071/AnsiballZ_stat.py'
Jan 21 23:42:43 compute-0 sudo[226641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:43 compute-0 podman[226644]: 2026-01-21 23:42:43.818653701 +0000 UTC m=+0.062994137 container create ebd807b0f7660106b15ab822db4884b967de063b521b7baa9a9da841c7e1f107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:42:43 compute-0 systemd[1]: Started libpod-conmon-ebd807b0f7660106b15ab822db4884b967de063b521b7baa9a9da841c7e1f107.scope.
Jan 21 23:42:43 compute-0 podman[226644]: 2026-01-21 23:42:43.790915978 +0000 UTC m=+0.035256424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:42:43 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29130457d67ce68e45e5eb3633d092d66bd3a37ec347d792d387904533f46ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29130457d67ce68e45e5eb3633d092d66bd3a37ec347d792d387904533f46ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29130457d67ce68e45e5eb3633d092d66bd3a37ec347d792d387904533f46ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29130457d67ce68e45e5eb3633d092d66bd3a37ec347d792d387904533f46ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:42:43 compute-0 podman[226644]: 2026-01-21 23:42:43.922654976 +0000 UTC m=+0.166995402 container init ebd807b0f7660106b15ab822db4884b967de063b521b7baa9a9da841c7e1f107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 23:42:43 compute-0 podman[226644]: 2026-01-21 23:42:43.935386004 +0000 UTC m=+0.179726440 container start ebd807b0f7660106b15ab822db4884b967de063b521b7baa9a9da841c7e1f107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 23:42:43 compute-0 podman[226644]: 2026-01-21 23:42:43.939349234 +0000 UTC m=+0.183689650 container attach ebd807b0f7660106b15ab822db4884b967de063b521b7baa9a9da841c7e1f107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 21 23:42:43 compute-0 python3.9[226655]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:42:43 compute-0 sudo[226641]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:44.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:44 compute-0 sudo[226787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plsyrrydfgeyymmonrdkollizhbawiby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038963.3995066-662-50196681898071/AnsiballZ_copy.py'
Jan 21 23:42:44 compute-0 sudo[226787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:44 compute-0 python3.9[226789]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769038963.3995066-662-50196681898071/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:42:44 compute-0 sudo[226787]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:44 compute-0 adoring_neumann[226662]: {
Jan 21 23:42:44 compute-0 adoring_neumann[226662]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:42:44 compute-0 adoring_neumann[226662]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:42:44 compute-0 adoring_neumann[226662]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:42:44 compute-0 adoring_neumann[226662]:         "osd_id": 1,
Jan 21 23:42:44 compute-0 adoring_neumann[226662]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:42:44 compute-0 adoring_neumann[226662]:         "type": "bluestore"
Jan 21 23:42:44 compute-0 adoring_neumann[226662]:     }
Jan 21 23:42:44 compute-0 adoring_neumann[226662]: }
Jan 21 23:42:44 compute-0 systemd[1]: libpod-ebd807b0f7660106b15ab822db4884b967de063b521b7baa9a9da841c7e1f107.scope: Deactivated successfully.
Jan 21 23:42:44 compute-0 podman[226644]: 2026-01-21 23:42:44.902082145 +0000 UTC m=+1.146422541 container died ebd807b0f7660106b15ab822db4884b967de063b521b7baa9a9da841c7e1f107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:42:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c29130457d67ce68e45e5eb3633d092d66bd3a37ec347d792d387904533f46ec-merged.mount: Deactivated successfully.
Jan 21 23:42:44 compute-0 podman[226644]: 2026-01-21 23:42:44.9761637 +0000 UTC m=+1.220504106 container remove ebd807b0f7660106b15ab822db4884b967de063b521b7baa9a9da841c7e1f107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_neumann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:42:44 compute-0 systemd[1]: libpod-conmon-ebd807b0f7660106b15ab822db4884b967de063b521b7baa9a9da841c7e1f107.scope: Deactivated successfully.
Jan 21 23:42:45 compute-0 sudo[226388]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:42:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:42:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:42:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:42:45 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 0a37ca27-ed25-4f08-9afc-d30f19b9702f does not exist
Jan 21 23:42:45 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev f3c4d6eb-07cb-45da-add8-42a76f2aa216 does not exist
Jan 21 23:42:45 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 3df13d98-b21a-4667-9c69-97d4b09cecf5 does not exist
Jan 21 23:42:45 compute-0 sudo[226919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:42:45 compute-0 sudo[226919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:45 compute-0 sudo[226919]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:45 compute-0 sudo[226968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:42:45 compute-0 sudo[226968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:42:45 compute-0 sudo[226968]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:45 compute-0 sudo[227019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozwsaskcocoyozvfmocectcqtzufdqpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038964.8976507-707-46910123633756/AnsiballZ_command.py'
Jan 21 23:42:45 compute-0 sudo[227019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:45 compute-0 ceph-mon[74318]: pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:42:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:42:45 compute-0 python3.9[227021]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:42:45 compute-0 sudo[227019]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:45.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:46 compute-0 sudo[227173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvbcfwsjkmmyydxxfntxmytovilxeglh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038965.7368348-731-275471115915666/AnsiballZ_lineinfile.py'
Jan 21 23:42:46 compute-0 sudo[227173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:46.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:46 compute-0 python3.9[227175]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:42:46 compute-0 sudo[227173]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:46 compute-0 ceph-mon[74318]: pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:47 compute-0 sudo[227325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryslnoiegyemdmybygkepexzudgzjiqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038966.540769-755-252745440364438/AnsiballZ_replace.py'
Jan 21 23:42:47 compute-0 sudo[227325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:47 compute-0 python3.9[227327]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:42:47 compute-0 sudo[227325]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:42:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:47.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:47 compute-0 sudo[227478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atcaitmfxyxofnkhmtgaeicptkgubaxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038967.495526-779-124572170069614/AnsiballZ_replace.py'
Jan 21 23:42:47 compute-0 sudo[227478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:48 compute-0 python3.9[227480]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:42:48 compute-0 sudo[227478]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:48.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:48 compute-0 sudo[227630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prisqeywkozkscgjbzbsnzfjldrtinry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038968.3548822-806-5780801960029/AnsiballZ_lineinfile.py'
Jan 21 23:42:48 compute-0 sudo[227630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:42:48.736 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:42:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:42:48.739 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:42:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:42:48.740 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:42:48 compute-0 python3.9[227632]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:42:48 compute-0 sudo[227630]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:49 compute-0 ceph-mon[74318]: pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:49 compute-0 sudo[227783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtzxluojymhfqnjpcjejtybocpgpsxrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038969.087142-806-235679546820644/AnsiballZ_lineinfile.py'
Jan 21 23:42:49 compute-0 sudo[227783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:49.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:49 compute-0 python3.9[227785]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:42:49 compute-0 sudo[227783]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:50 compute-0 sudo[227935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oruudfjjnmotciuixzeaxaudjcgrowbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038969.7873478-806-17028429589053/AnsiballZ_lineinfile.py'
Jan 21 23:42:50 compute-0 sudo[227935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:50.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:50 compute-0 python3.9[227937]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:42:50 compute-0 sudo[227935]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:50 compute-0 sudo[228087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fetwyxpdwkvwrypekopqpmcwacglvtws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038970.5745082-806-51895836288681/AnsiballZ_lineinfile.py'
Jan 21 23:42:50 compute-0 sudo[228087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:51 compute-0 python3.9[228089]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:42:51 compute-0 sudo[228087]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:51 compute-0 ceph-mon[74318]: pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:51.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:51 compute-0 sudo[228240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxffrbpxskvfovpzpykdubqsyfwpvitd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038971.310694-893-180499719387040/AnsiballZ_stat.py'
Jan 21 23:42:51 compute-0 sudo[228240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:51 compute-0 python3.9[228242]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:42:51 compute-0 sudo[228240]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:42:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:52.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3827 writes, 16K keys, 3826 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3827 writes, 3826 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1423 writes, 5776 keys, 1423 commit groups, 1.0 writes per commit group, ingest: 9.96 MB, 0.02 MB/s
                                           Interval WAL: 1423 writes, 1423 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     87.9      0.22              0.06         7    0.032       0      0       0.0       0.0
                                             L6      1/0    7.61 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.6    116.1     93.9      0.54              0.19         6    0.090     26K   3348       0.0       0.0
                                            Sum      1/0    7.61 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6     82.1     92.2      0.76              0.26        13    0.058     26K   3348       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.3    100.9    101.5      0.34              0.15         6    0.057     14K   2046       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    116.1     93.9      0.54              0.19         6    0.090     26K   3348       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     89.5      0.22              0.06         6    0.036       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.019, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.07 GB write, 0.06 MB/s write, 0.06 GB read, 0.05 MB/s read, 0.8 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f1db2f1f0#2 capacity: 304.00 MB usage: 2.27 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000118 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(104,2.02 MB,0.665594%) FilterBlock(14,83.05 KB,0.0266778%) IndexBlock(14,169.53 KB,0.0544598%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 21 23:42:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:52 compute-0 sudo[228394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kokoolibulcrvsmyziriietrgoburajo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038972.0653033-917-188147599289737/AnsiballZ_command.py'
Jan 21 23:42:52 compute-0 sudo[228394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:42:52.530467) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038972530506, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1154, "num_deletes": 255, "total_data_size": 2011345, "memory_usage": 2045048, "flush_reason": "Manual Compaction"}
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038972549883, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1979553, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15773, "largest_seqno": 16925, "table_properties": {"data_size": 1974079, "index_size": 2933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 10850, "raw_average_key_size": 18, "raw_value_size": 1963156, "raw_average_value_size": 3344, "num_data_blocks": 134, "num_entries": 587, "num_filter_entries": 587, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769038858, "oldest_key_time": 1769038858, "file_creation_time": 1769038972, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 19571 microseconds, and 6851 cpu microseconds.
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:42:52.550035) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1979553 bytes OK
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:42:52.550093) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:42:52.552477) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:42:52.552496) EVENT_LOG_v1 {"time_micros": 1769038972552490, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:42:52.552514) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2006240, prev total WAL file size 2006240, number of live WAL files 2.
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:42:52.553483) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1933KB)], [35(7796KB)]
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038972553616, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9963072, "oldest_snapshot_seqno": -1}
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4282 keys, 9606163 bytes, temperature: kUnknown
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038972623227, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 9606163, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9574733, "index_size": 19608, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 106760, "raw_average_key_size": 24, "raw_value_size": 9494375, "raw_average_value_size": 2217, "num_data_blocks": 817, "num_entries": 4282, "num_filter_entries": 4282, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769038972, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:42:52 compute-0 python3.9[228396]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:42:52.623430) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 9606163 bytes
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:42:52.641094) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.0 rd, 137.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.6 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(9.9) write-amplify(4.9) OK, records in: 4805, records dropped: 523 output_compression: NoCompression
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:42:52.641141) EVENT_LOG_v1 {"time_micros": 1769038972641123, "job": 16, "event": "compaction_finished", "compaction_time_micros": 69669, "compaction_time_cpu_micros": 21409, "output_level": 6, "num_output_files": 1, "total_output_size": 9606163, "num_input_records": 4805, "num_output_records": 4282, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038972642065, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038972644591, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:42:52.553338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:42:52.644824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:42:52.644831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:42:52.644835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:42:52.644838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:42:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:42:52.644841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:42:52 compute-0 sudo[228394]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:52 compute-0 podman[228422]: 2026-01-21 23:42:52.961680142 +0000 UTC m=+0.072377113 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 21 23:42:53 compute-0 ceph-mon[74318]: pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:53 compute-0 sudo[228568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrpynzkdsedbmysdiuvqtniqblbnpczw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038973.0606072-944-177400437928074/AnsiballZ_systemd_service.py'
Jan 21 23:42:53 compute-0 sudo[228568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:53.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:53 compute-0 python3.9[228570]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:42:53 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 21 23:42:53 compute-0 sudo[228568]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:54.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:42:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:54 compute-0 sudo[228724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocpsnigyvxdxkvoqbezosuncdenepcvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038974.1248732-968-164661006345938/AnsiballZ_systemd_service.py'
Jan 21 23:42:54 compute-0 sudo[228724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:54 compute-0 python3.9[228726]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:42:54 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 21 23:42:54 compute-0 udevadm[228731]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 21 23:42:54 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 21 23:42:54 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 21 23:42:54 compute-0 multipathd[228734]: --------start up--------
Jan 21 23:42:54 compute-0 multipathd[228734]: read /etc/multipath.conf
Jan 21 23:42:54 compute-0 multipathd[228734]: path checkers start up
Jan 21 23:42:55 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 21 23:42:55 compute-0 sudo[228724]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:55 compute-0 ceph-mon[74318]: pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:42:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:55.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:42:55 compute-0 sudo[228892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlxpnagvovlzpzdhqhpuflkwkhfuctqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038975.6049707-1004-125531729101782/AnsiballZ_file.py'
Jan 21 23:42:55 compute-0 sudo[228892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:56 compute-0 python3.9[228894]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 21 23:42:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:56.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:56 compute-0 sudo[228892]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:56 compute-0 sudo[229044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtmobzrhtelayvnhflgcrecleybgshto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038976.4783325-1028-214748773072073/AnsiballZ_modprobe.py'
Jan 21 23:42:56 compute-0 sudo[229044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:57 compute-0 python3.9[229046]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 21 23:42:57 compute-0 kernel: Key type psk registered
Jan 21 23:42:57 compute-0 sudo[229044]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:57 compute-0 ceph-mon[74318]: pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:42:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:42:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:57.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:42:57 compute-0 sudo[229208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzckchueczbpvhiegcqctgmryisbraiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038977.3754218-1052-132370952204729/AnsiballZ_stat.py'
Jan 21 23:42:57 compute-0 sudo[229208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:57 compute-0 python3.9[229210]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:42:57 compute-0 sudo[229208]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:42:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:42:58.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:42:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:58 compute-0 sudo[229331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsszvctiehteymuqpoglhctwaeedqdrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038977.3754218-1052-132370952204729/AnsiballZ_copy.py'
Jan 21 23:42:58 compute-0 sudo[229331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:58 compute-0 python3.9[229333]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769038977.3754218-1052-132370952204729/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:42:58 compute-0 sudo[229331]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:59 compute-0 sudo[229483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kffaxxaxemwsyatchcvvwcappchnzvxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038979.001151-1100-200212208145903/AnsiballZ_lineinfile.py'
Jan 21 23:42:59 compute-0 sudo[229483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:42:59 compute-0 ceph-mon[74318]: pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:42:59 compute-0 python3.9[229485]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:42:59 compute-0 sudo[229483]: pam_unix(sudo:session): session closed for user root
Jan 21 23:42:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:42:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:42:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:42:59.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:43:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:00.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:00 compute-0 sudo[229636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izwmblzbpykexersdpcfqgzywxlhbgeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038979.8150303-1124-208147866852645/AnsiballZ_systemd.py'
Jan 21 23:43:00 compute-0 sudo[229636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:00 compute-0 python3.9[229638]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 23:43:00 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 21 23:43:00 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 21 23:43:00 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 21 23:43:00 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 21 23:43:00 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 21 23:43:00 compute-0 sudo[229636]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:01 compute-0 sudo[229792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhiwdnnbfnszqhldimiypmpaaqbuwatj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038980.9797935-1148-182394266332848/AnsiballZ_dnf.py'
Jan 21 23:43:01 compute-0 sudo[229792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:01 compute-0 ceph-mon[74318]: pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:01 compute-0 python3.9[229794]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 23:43:01 compute-0 sudo[229796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:43:01 compute-0 sudo[229796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:01 compute-0 sudo[229796]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:01.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:01 compute-0 sudo[229822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:43:01 compute-0 sudo[229822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:01 compute-0 sudo[229822]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:02.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:02 compute-0 ceph-mon[74318]: pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:43:02.463091) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038982463157, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 335, "num_deletes": 251, "total_data_size": 174883, "memory_usage": 181032, "flush_reason": "Manual Compaction"}
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038982466440, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 173316, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16926, "largest_seqno": 17260, "table_properties": {"data_size": 171194, "index_size": 286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5317, "raw_average_key_size": 18, "raw_value_size": 167065, "raw_average_value_size": 580, "num_data_blocks": 13, "num_entries": 288, "num_filter_entries": 288, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769038973, "oldest_key_time": 1769038973, "file_creation_time": 1769038982, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 3386 microseconds, and 1313 cpu microseconds.
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:43:02.466487) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 173316 bytes OK
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:43:02.466502) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:43:02.468047) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:43:02.468073) EVENT_LOG_v1 {"time_micros": 1769038982468068, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:43:02.468086) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 172599, prev total WAL file size 172599, number of live WAL files 2.
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:43:02.468508) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(169KB)], [38(9381KB)]
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038982468688, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 9779479, "oldest_snapshot_seqno": -1}
Jan 21 23:43:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4060 keys, 7746439 bytes, temperature: kUnknown
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038982536968, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 7746439, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7718116, "index_size": 17077, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 102823, "raw_average_key_size": 25, "raw_value_size": 7643211, "raw_average_value_size": 1882, "num_data_blocks": 702, "num_entries": 4060, "num_filter_entries": 4060, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769038982, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:43:02.538270) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7746439 bytes
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:43:02.539611) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.1 rd, 113.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.2 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(101.1) write-amplify(44.7) OK, records in: 4570, records dropped: 510 output_compression: NoCompression
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:43:02.539629) EVENT_LOG_v1 {"time_micros": 1769038982539620, "job": 18, "event": "compaction_finished", "compaction_time_micros": 68349, "compaction_time_cpu_micros": 35783, "output_level": 6, "num_output_files": 1, "total_output_size": 7746439, "num_input_records": 4570, "num_output_records": 4060, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038982539816, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769038982541573, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:43:02.468386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:43:02.541668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:43:02.541673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:43:02.541675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:43:02.541677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:43:02 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:43:02.541678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:43:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:43:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:03.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:43:03 compute-0 systemd[1]: Reloading.
Jan 21 23:43:03 compute-0 systemd-rc-local-generator[229879]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:43:03 compute-0 systemd-sysv-generator[229882]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:43:03 compute-0 systemd[1]: Reloading.
Jan 21 23:43:04 compute-0 systemd-rc-local-generator[229913]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:43:04 compute-0 systemd-sysv-generator[229918]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:43:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:04.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:04 compute-0 systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 21 23:43:04 compute-0 systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 21 23:43:04 compute-0 lvm[229964]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 23:43:04 compute-0 lvm[229964]: VG ceph_vg0 finished
Jan 21 23:43:04 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 23:43:04 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 21 23:43:04 compute-0 systemd[1]: Reloading.
Jan 21 23:43:04 compute-0 systemd-rc-local-generator[230011]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:43:04 compute-0 systemd-sysv-generator[230014]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:43:04 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 23:43:05 compute-0 ceph-mon[74318]: pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:05 compute-0 sudo[229792]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:05.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:06 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 23:43:06 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 23:43:06 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.669s CPU time.
Jan 21 23:43:06 compute-0 systemd[1]: run-rd6d359adcf7b47cf9d02ebf12b43cb24.service: Deactivated successfully.
Jan 21 23:43:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:06.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:07 compute-0 sudo[231314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmbogfqmrbpjabdusrxpubskbxjlckek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038986.6854827-1172-195854033221535/AnsiballZ_systemd_service.py'
Jan 21 23:43:07 compute-0 sudo[231314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:07 compute-0 podman[231316]: 2026-01-21 23:43:07.199479564 +0000 UTC m=+0.129306645 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 23:43:07 compute-0 ceph-mon[74318]: pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:07 compute-0 python3.9[231317]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 23:43:07 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 21 23:43:07 compute-0 iscsid[223843]: iscsid shutting down.
Jan 21 23:43:07 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 21 23:43:07 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 21 23:43:07 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 21 23:43:07 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 21 23:43:07 compute-0 systemd[1]: Started Open-iSCSI.
Jan 21 23:43:07 compute-0 sudo[231314]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:43:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:07.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:43:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:08.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:43:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:08 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 21 23:43:08 compute-0 sudo[231498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kasjcluqhrbgjohlowszndwnubsfjnro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038988.2598093-1196-173098079508908/AnsiballZ_systemd_service.py'
Jan 21 23:43:08 compute-0 sudo[231498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:08 compute-0 python3.9[231500]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 23:43:09 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 21 23:43:09 compute-0 multipathd[228734]: exit (signal)
Jan 21 23:43:09 compute-0 multipathd[228734]: --------shut down-------
Jan 21 23:43:09 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 21 23:43:09 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 21 23:43:09 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 21 23:43:09 compute-0 multipathd[231506]: --------start up--------
Jan 21 23:43:09 compute-0 multipathd[231506]: read /etc/multipath.conf
Jan 21 23:43:09 compute-0 multipathd[231506]: path checkers start up
Jan 21 23:43:09 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 21 23:43:09 compute-0 sudo[231498]: pam_unix(sudo:session): session closed for user root
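Both restarts above (iscsid at 23:43:07, multipathd at 23:43:09) come from ansible.builtin.systemd_service with state=restarted, which systemd carries out as the stop/start pairs visible in the log ("Stopping Open-iSCSI..." / "Started Open-iSCSI.", and likewise for the Device-Mapper Multipath Device Controller). A minimal sketch of the same operation outside Ansible, assuming direct systemctl access (restart_unit is an illustrative helper, not from the log):

import subprocess

def restart_unit(name: str) -> None:
    # "systemctl restart" performs the stop-then-start sequence that
    # the journal records for each unit
    subprocess.run(["systemctl", "restart", name], check=True)

for unit in ("iscsid.service", "multipathd.service"):
    restart_unit(unit)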
Jan 21 23:43:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:43:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:43:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:43:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:43:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:43:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:43:09 compute-0 ceph-mon[74318]: pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:09 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 21 23:43:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:09.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:10.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:10 compute-0 python3.9[231665]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 23:43:11 compute-0 sudo[231819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcfbvwmcbyfavpamjhcmoxfducqgrvzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038990.7527885-1248-131787981335923/AnsiballZ_file.py'
Jan 21 23:43:11 compute-0 sudo[231819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:11 compute-0 python3.9[231821]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:43:11 compute-0 sudo[231819]: pam_unix(sudo:session): session closed for user root
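The ansible.builtin.file call above, with state=touch and mode=0644, creates /etc/ssh/ssh_known_hosts if it is missing and normalizes its permissions. A rough Python equivalent under the same path and mode:

from pathlib import Path

known_hosts = Path("/etc/ssh/ssh_known_hosts")
known_hosts.touch(exist_ok=True)  # state=touch: create if absent, bump times
known_hosts.chmod(0o644)          # mode=0644 as passed to the module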
Jan 21 23:43:11 compute-0 ceph-mon[74318]: pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:11.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:12 compute-0 sudo[231972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfgmklexwjhwvgohudishsunjqceqaqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769038991.7947073-1281-244639671349541/AnsiballZ_systemd_service.py'
Jan 21 23:43:12 compute-0 sudo[231972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:12.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:12 compute-0 python3.9[231974]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 23:43:12 compute-0 systemd[1]: Reloading.
Jan 21 23:43:12 compute-0 systemd-rc-local-generator[232000]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:43:12 compute-0 systemd-sysv-generator[232004]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:43:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:43:12 compute-0 sudo[231972]: pam_unix(sudo:session): session closed for user root
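The systemd_service invocation at 23:43:12 passes daemon_reload=True with no unit name, so the module only re-reads unit definitions; the "Reloading." line and the rc-local/sysv generator messages that follow are systemd rescanning its configuration. The non-Ansible equivalent is a single command (sketch, assuming root):

import subprocess

# Equivalent of ansible.builtin.systemd_service with daemon_reload=True:
# systemd re-runs its generators and reloads all unit files.
subprocess.run(["systemctl", "daemon-reload"], check=True)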
Jan 21 23:43:13 compute-0 ceph-mon[74318]: pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:13.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:13 compute-0 python3.9[232159]: ansible-ansible.builtin.service_facts Invoked
Jan 21 23:43:13 compute-0 network[232176]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 23:43:13 compute-0 network[232177]: 'network-scripts' will be removed from distribution in near future.
Jan 21 23:43:13 compute-0 network[232178]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 23:43:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:14.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:15 compute-0 ceph-mon[74318]: pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:43:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:15.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:43:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:43:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:16.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:43:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:17 compute-0 ceph-mon[74318]: pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:43:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:17.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:18.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:18 compute-0 ceph-mon[74318]: pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:43:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:19.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:43:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:20.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:21 compute-0 ceph-mon[74318]: pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:21 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 21 23:43:21 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 21 23:43:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:21.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:21 compute-0 sudo[232330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:43:21 compute-0 sudo[232330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:21 compute-0 sudo[232330]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:21 compute-0 sudo[232355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:43:21 compute-0 sudo[232355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:21 compute-0 sudo[232355]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:22.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:22 compute-0 sudo[232505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkldffyogbfsgoldeogqwqbjjkmveiut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039001.8778965-1338-216122069485428/AnsiballZ_systemd_service.py'
Jan 21 23:43:22 compute-0 sudo[232505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:22 compute-0 python3.9[232507]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:43:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:43:22 compute-0 sudo[232505]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:23 compute-0 sudo[232671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctdkwohkelektfjxbwogeeympaeeckwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039002.7380164-1338-38028709070755/AnsiballZ_systemd_service.py'
Jan 21 23:43:23 compute-0 sudo[232671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:23 compute-0 podman[232632]: 2026-01-21 23:43:23.068357394 +0000 UTC m=+0.062657507 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 21 23:43:23 compute-0 ceph-mon[74318]: pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:23 compute-0 python3.9[232679]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:43:23 compute-0 sudo[232671]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:23.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:23 compute-0 sudo[232831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoxturrbcscpkkmlhulfdqgboujmcpnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039003.5579405-1338-271391352287776/AnsiballZ_systemd_service.py'
Jan 21 23:43:23 compute-0 sudo[232831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:24.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:24 compute-0 python3.9[232833]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:43:24 compute-0 sudo[232831]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:24 compute-0 sudo[232984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zijmrutdlsbnaossfphrjmctjgnessup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039004.403691-1338-69968350744313/AnsiballZ_systemd_service.py'
Jan 21 23:43:24 compute-0 sudo[232984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:25 compute-0 python3.9[232986]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:43:25 compute-0 sudo[232984]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:25 compute-0 ceph-mon[74318]: pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:25 compute-0 sudo[233138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtcmqxqniuggtxbhdwigowqivhcnbjbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039005.2219856-1338-212760130590493/AnsiballZ_systemd_service.py'
Jan 21 23:43:25 compute-0 sudo[233138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:25.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:25 compute-0 python3.9[233140]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:43:25 compute-0 sudo[233138]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:26.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:26 compute-0 sudo[233291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnslhatwrifzhyqbstkqzvadlahrukvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039006.1036234-1338-215810762005193/AnsiballZ_systemd_service.py'
Jan 21 23:43:26 compute-0 sudo[233291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:26 compute-0 python3.9[233293]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:43:26 compute-0 sudo[233291]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:27 compute-0 ceph-mon[74318]: pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:27 compute-0 sudo[233444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbuldlhfadztkyxwlkqfdrlhblrkmxcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039006.9691677-1338-275054232128511/AnsiballZ_systemd_service.py'
Jan 21 23:43:27 compute-0 sudo[233444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:43:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:27.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:27 compute-0 python3.9[233446]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:43:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:43:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:28.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:43:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:28 compute-0 sudo[233444]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:29 compute-0 sudo[233598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoxvzzpwkvoftrynsrxjsixrixfojrpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039008.855889-1338-248476302532522/AnsiballZ_systemd_service.py'
Jan 21 23:43:29 compute-0 sudo[233598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:29 compute-0 ceph-mon[74318]: pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:29 compute-0 python3.9[233600]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:43:29 compute-0 sudo[233598]: pam_unix(sudo:session): session closed for user root
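Between 23:43:22 and 23:43:29 the playbook walks the legacy TripleO nova units one by one, each time invoking ansible.builtin.systemd_service with enabled=False and state=stopped. A compact sketch of the same sweep, assuming systemctl is available; the unit list is taken verbatim from the logged invocations:

import subprocess

TRIPLEO_NOVA_UNITS = [
    "tripleo_nova_compute.service",
    "tripleo_nova_migration_target.service",
    "tripleo_nova_api_cron.service",
    "tripleo_nova_api.service",
    "tripleo_nova_conductor.service",
    "tripleo_nova_metadata.service",
    "tripleo_nova_scheduler.service",
    "tripleo_nova_vnc_proxy.service",
]

for unit in TRIPLEO_NOVA_UNITS:
    # "disable --now" combines enabled=False and state=stopped in one call;
    # check=False so an already-absent unit does not abort the sweep
    subprocess.run(["systemctl", "disable", "--now", unit], check=False)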
Jan 21 23:43:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:29.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:30.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:31 compute-0 ceph-mon[74318]: pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:43:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:31.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:43:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:32.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:32 compute-0 sudo[233753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzntpgedgegtklotppmkfppxrcvtwnsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039012.0404208-1515-227532837725816/AnsiballZ_file.py'
Jan 21 23:43:32 compute-0 sudo[233753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:43:32 compute-0 python3.9[233755]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:43:32 compute-0 sudo[233753]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:33 compute-0 sudo[233905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjrmjxpystwcvjtzvwcgxnzjwfallpxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039012.7946482-1515-14249476597235/AnsiballZ_file.py'
Jan 21 23:43:33 compute-0 sudo[233905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:33 compute-0 python3.9[233907]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:43:33 compute-0 sudo[233905]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:33 compute-0 ceph-mon[74318]: pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:33.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:33 compute-0 sudo[234058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzbzuhrhtfzijzthrxfgjifdlnifxfns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039013.474419-1515-221201504266556/AnsiballZ_file.py'
Jan 21 23:43:33 compute-0 sudo[234058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:34 compute-0 python3.9[234060]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:43:34 compute-0 sudo[234058]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:34.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:34 compute-0 sudo[234210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xexpncsynjbeggqilfqeihwmdliyaozd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039014.247451-1515-190186249709442/AnsiballZ_file.py'
Jan 21 23:43:34 compute-0 sudo[234210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:34 compute-0 python3.9[234212]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:43:34 compute-0 sudo[234210]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:35 compute-0 sudo[234362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inqwwbtfxmscdgrpcgcygvdnjqgvsszl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039014.991098-1515-72035033327189/AnsiballZ_file.py'
Jan 21 23:43:35 compute-0 sudo[234362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:35 compute-0 ceph-mon[74318]: pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:35 compute-0 python3.9[234364]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:43:35 compute-0 sudo[234362]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:35.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:36 compute-0 sudo[234515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsbaglidgustfavioailhwytdegcfaub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039015.6963682-1515-46277673170827/AnsiballZ_file.py'
Jan 21 23:43:36 compute-0 sudo[234515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:36.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:36 compute-0 python3.9[234517]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:43:36 compute-0 sudo[234515]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:36 compute-0 sudo[234667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhuwmlayuxtdofqgyrjoplqcnxanndrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039016.5063844-1515-232134503463662/AnsiballZ_file.py'
Jan 21 23:43:36 compute-0 sudo[234667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:37 compute-0 python3.9[234669]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:43:37 compute-0 sudo[234667]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:37 compute-0 ceph-mon[74318]: pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:43:37 compute-0 sudo[234837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktdrbkelkqacqwmeqxgxqlzbwuvyipno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039017.2746363-1515-106473270447250/AnsiballZ_file.py'
Jan 21 23:43:37 compute-0 sudo[234837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:37.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:37 compute-0 podman[234794]: 2026-01-21 23:43:37.645412327 +0000 UTC m=+0.111337138 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 21 23:43:37 compute-0 python3.9[234842]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:43:37 compute-0 sudo[234837]: pam_unix(sudo:session): session closed for user root
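Once the units are stopped and disabled, the playbook deletes their unit files: first from /usr/lib/systemd/system (23:43:32 through 23:43:37, completed above), then from /etc/systemd/system (beginning immediately below), via ansible.builtin.file with state=absent. A self-contained sketch of both sweeps over the same unit names:

from pathlib import Path

UNITS = [f"tripleo_nova_{svc}.service" for svc in (
    "compute", "migration_target", "api_cron", "api",
    "conductor", "metadata", "scheduler", "vnc_proxy",
)]

for tree in ("/usr/lib/systemd/system", "/etc/systemd/system"):
    for unit in UNITS:
        # state=absent semantics: remove if present, succeed quietly
        # when the file is already gone
        Path(tree, unit).unlink(missing_ok=True)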
Jan 21 23:43:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:38.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:38 compute-0 sudo[234998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiaokdxglloseffvnwkcdilpqismwexg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039018.0377176-1686-124163258193791/AnsiballZ_file.py'
Jan 21 23:43:38 compute-0 sudo[234998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:38 compute-0 python3.9[235000]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:43:38 compute-0 sudo[234998]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:39 compute-0 sudo[235150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeccwffliimqmiomogehjiqsxfgkbsea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039018.8285577-1686-223515848799779/AnsiballZ_file.py'
Jan 21 23:43:39 compute-0 sudo[235150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:43:39
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['vms', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'backups', '.rgw.root']
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:43:39 compute-0 python3.9[235152]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:43:39 compute-0 sudo[235150]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:43:39 compute-0 ceph-mon[74318]: pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
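The ceph-mgr balancer pass above ran in upmap mode over all eleven pools and prepared 0/10 changes, meaning placement groups are already evenly mapped and no upmap entries were needed. The same state can be checked by hand with the standard balancer CLI (sketch; the subprocess wrapper is illustrative and assumes a usable client keyring):

import subprocess

# "ceph balancer status" reports the active mode (upmap here) and the
# outcome of the most recent optimization pass.
status = subprocess.run(
    ["ceph", "balancer", "status"],
    check=True, capture_output=True, text=True,
).stdout
print(status)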
Jan 21 23:43:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:39.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:39 compute-0 sudo[235303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oddpepvcdazumadmmxeldoiakarrsvgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039019.5552006-1686-31316179655101/AnsiballZ_file.py'
Jan 21 23:43:39 compute-0 sudo[235303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:40 compute-0 python3.9[235305]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:43:40 compute-0 sudo[235303]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:43:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:40.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:43:40 compute-0 ceph-mon[74318]: pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:40 compute-0 sudo[235455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cutjhjweszqnkwpiksqfzrehmvjutitf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039020.207413-1686-177864113131277/AnsiballZ_file.py'
Jan 21 23:43:40 compute-0 sudo[235455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:40 compute-0 python3.9[235457]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:43:40 compute-0 sudo[235455]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:41 compute-0 sudo[235607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pesllolivfkynjrndrlicyfrcaszfoiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039020.9045558-1686-165480908194340/AnsiballZ_file.py'
Jan 21 23:43:41 compute-0 sudo[235607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:41 compute-0 python3.9[235609]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:43:41 compute-0 sudo[235607]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:41.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:41 compute-0 sudo[235716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:43:41 compute-0 sudo[235716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:41 compute-0 sudo[235716]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:41 compute-0 sudo[235791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znekjljbxqfjyqrfwwpzwmiyelfeuxms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039021.634824-1686-158502275323038/AnsiballZ_file.py'
Jan 21 23:43:41 compute-0 sudo[235791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:41 compute-0 sudo[235780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:43:41 compute-0 sudo[235780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:41 compute-0 sudo[235780]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:42 compute-0 python3.9[235810]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:43:42 compute-0 sudo[235791]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:42.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:43:42 compute-0 sudo[235962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfzonjvpiuymmimqiylqorggryffthsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039022.2996364-1686-258829260399629/AnsiballZ_file.py'
Jan 21 23:43:42 compute-0 sudo[235962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:42 compute-0 python3.9[235964]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:43:42 compute-0 sudo[235962]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:43 compute-0 ceph-mon[74318]: pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:43 compute-0 sudo[236114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jexhzscuvrseiichbgpsbcputsqdfdic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039022.9396048-1686-259781147828648/AnsiballZ_file.py'
Jan 21 23:43:43 compute-0 sudo[236114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:43 compute-0 python3.9[236116]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:43:43 compute-0 sudo[236114]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:43.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:44.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:44 compute-0 sudo[236267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwckxpqosyjolxblkebqxdssrwfwxdpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039024.0123765-1860-258008804306578/AnsiballZ_command.py'
Jan 21 23:43:44 compute-0 sudo[236267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:44 compute-0 python3.9[236269]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:43:44 compute-0 sudo[236267]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:45 compute-0 ceph-mon[74318]: pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:45 compute-0 python3.9[236421]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 23:43:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:45.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:45 compute-0 sudo[236447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:43:45 compute-0 sudo[236447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:45 compute-0 sudo[236447]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:45 compute-0 sudo[236472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:43:45 compute-0 sudo[236472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:45 compute-0 sudo[236472]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:45 compute-0 sudo[236502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:43:45 compute-0 sudo[236502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:45 compute-0 sudo[236502]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:45 compute-0 sudo[236554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:43:45 compute-0 sudo[236554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:46 compute-0 sudo[236687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iosnwmoyjsqhtzqynaruwezjszzdlzuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039025.852279-1914-249651137470855/AnsiballZ_systemd_service.py'
Jan 21 23:43:46 compute-0 sudo[236687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:43:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:46.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:43:46 compute-0 python3.9[236689]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 23:43:46 compute-0 sudo[236554]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:46 compute-0 systemd[1]: Reloading.
Jan 21 23:43:46 compute-0 systemd-rc-local-generator[236732]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:43:46 compute-0 systemd-sysv-generator[236737]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:43:46 compute-0 sudo[236687]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:47 compute-0 ceph-mon[74318]: pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:47 compute-0 sudo[236892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeeqnzzozeqgguwtswzzcxrxebmkeuiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039027.1112292-1938-154052565038335/AnsiballZ_command.py'
Jan 21 23:43:47 compute-0 sudo[236892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:43:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:47.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:47 compute-0 python3.9[236894]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:43:47 compute-0 sudo[236892]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:43:48 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:43:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:43:48 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:43:48 compute-0 sudo[237045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlqxhilhszkopztsollassbefxiuvsgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039027.8467479-1938-205906677788922/AnsiballZ_command.py'
Jan 21 23:43:48 compute-0 sudo[237045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:48.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:48 compute-0 python3.9[237047]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:43:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:43:48.738 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:43:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:43:48.740 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:43:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:43:48.740 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:43:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:43:48 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:43:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:43:48 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:43:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:43:48 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:43:48 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev e55a880c-997b-4578-ac6c-4cb18b98cf1b does not exist
Jan 21 23:43:48 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 95cd04a9-3042-4835-b03e-432381c1a782 does not exist
Jan 21 23:43:48 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev e50e6c25-164d-43a4-8e7d-68efdfd7e91f does not exist
Jan 21 23:43:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:43:48 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:43:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:43:48 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:43:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:43:48 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:43:48 compute-0 sudo[237049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:43:48 compute-0 sudo[237049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:48 compute-0 sudo[237049]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:43:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:43:49 compute-0 ceph-mon[74318]: pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:43:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:43:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:43:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:43:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:43:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:43:49 compute-0 sudo[237074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:43:49 compute-0 sudo[237074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:49 compute-0 sudo[237074]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:49 compute-0 sudo[237099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:43:49 compute-0 sudo[237099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:49 compute-0 sudo[237099]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:49 compute-0 sudo[237124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:43:49 compute-0 sudo[237124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:49 compute-0 sudo[237045]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:49 compute-0 podman[237220]: 2026-01-21 23:43:49.600651573 +0000 UTC m=+0.048280110 container create 60ae21879017e2396eebb4be3e0dc611362967eaab71174ccd18cb693cd0f583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 23:43:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:49.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:49 compute-0 systemd[1]: Started libpod-conmon-60ae21879017e2396eebb4be3e0dc611362967eaab71174ccd18cb693cd0f583.scope.
Jan 21 23:43:49 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:43:49 compute-0 podman[237220]: 2026-01-21 23:43:49.579269203 +0000 UTC m=+0.026897770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:43:49 compute-0 podman[237220]: 2026-01-21 23:43:49.689101994 +0000 UTC m=+0.136730551 container init 60ae21879017e2396eebb4be3e0dc611362967eaab71174ccd18cb693cd0f583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_engelbart, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 23:43:49 compute-0 podman[237220]: 2026-01-21 23:43:49.697307665 +0000 UTC m=+0.144936242 container start 60ae21879017e2396eebb4be3e0dc611362967eaab71174ccd18cb693cd0f583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_engelbart, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 23:43:49 compute-0 podman[237220]: 2026-01-21 23:43:49.701211053 +0000 UTC m=+0.148839590 container attach 60ae21879017e2396eebb4be3e0dc611362967eaab71174ccd18cb693cd0f583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 23:43:49 compute-0 festive_engelbart[237277]: 167 167
Jan 21 23:43:49 compute-0 systemd[1]: libpod-60ae21879017e2396eebb4be3e0dc611362967eaab71174ccd18cb693cd0f583.scope: Deactivated successfully.
Jan 21 23:43:49 compute-0 podman[237311]: 2026-01-21 23:43:49.757615429 +0000 UTC m=+0.033942474 container died 60ae21879017e2396eebb4be3e0dc611362967eaab71174ccd18cb693cd0f583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_engelbart, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:43:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-0aeef7cd0b19c017bd9964db8ed3e68ed3583d88fcf0d55590bb940a0f9d95b6-merged.mount: Deactivated successfully.
Jan 21 23:43:49 compute-0 podman[237311]: 2026-01-21 23:43:49.796678297 +0000 UTC m=+0.073005322 container remove 60ae21879017e2396eebb4be3e0dc611362967eaab71174ccd18cb693cd0f583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_engelbart, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:43:49 compute-0 systemd[1]: libpod-conmon-60ae21879017e2396eebb4be3e0dc611362967eaab71174ccd18cb693cd0f583.scope: Deactivated successfully.
Jan 21 23:43:49 compute-0 sudo[237378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saraqtxfnoeitqsjsubxkstuedirmldy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039029.576689-1938-200042326906926/AnsiballZ_command.py'
Jan 21 23:43:49 compute-0 sudo[237378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:50 compute-0 podman[237386]: 2026-01-21 23:43:50.056059359 +0000 UTC m=+0.081532321 container create 6fc834508ebaf7f482956c023e0c2024edccb799796c0694e97898a9b8192658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 23:43:50 compute-0 python3.9[237380]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:43:50 compute-0 systemd[1]: Started libpod-conmon-6fc834508ebaf7f482956c023e0c2024edccb799796c0694e97898a9b8192658.scope.
Jan 21 23:43:50 compute-0 podman[237386]: 2026-01-21 23:43:50.025967744 +0000 UTC m=+0.051440756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:43:50 compute-0 sudo[237378]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:50 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aee773db20110ffaa0c03f0d66ef80750f911f9e85d1e3fb579b2bd90c7da76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aee773db20110ffaa0c03f0d66ef80750f911f9e85d1e3fb579b2bd90c7da76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aee773db20110ffaa0c03f0d66ef80750f911f9e85d1e3fb579b2bd90c7da76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aee773db20110ffaa0c03f0d66ef80750f911f9e85d1e3fb579b2bd90c7da76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aee773db20110ffaa0c03f0d66ef80750f911f9e85d1e3fb579b2bd90c7da76/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:43:50 compute-0 podman[237386]: 2026-01-21 23:43:50.165398856 +0000 UTC m=+0.190871858 container init 6fc834508ebaf7f482956c023e0c2024edccb799796c0694e97898a9b8192658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lumiere, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 23:43:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:50 compute-0 podman[237386]: 2026-01-21 23:43:50.1809796 +0000 UTC m=+0.206452562 container start 6fc834508ebaf7f482956c023e0c2024edccb799796c0694e97898a9b8192658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lumiere, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:43:50 compute-0 podman[237386]: 2026-01-21 23:43:50.18493978 +0000 UTC m=+0.210412742 container attach 6fc834508ebaf7f482956c023e0c2024edccb799796c0694e97898a9b8192658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 23:43:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:50.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:50 compute-0 sudo[237558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdssptyijfwirjerbocwxlkazqebosvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039030.2751572-1938-18915727120194/AnsiballZ_command.py'
Jan 21 23:43:50 compute-0 sudo[237558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:50 compute-0 python3.9[237560]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:43:50 compute-0 sudo[237558]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:50 compute-0 lucid_lumiere[237404]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:43:50 compute-0 lucid_lumiere[237404]: --> relative data size: 1.0
Jan 21 23:43:50 compute-0 lucid_lumiere[237404]: --> All data devices are unavailable
Jan 21 23:43:51 compute-0 systemd[1]: libpod-6fc834508ebaf7f482956c023e0c2024edccb799796c0694e97898a9b8192658.scope: Deactivated successfully.
Jan 21 23:43:51 compute-0 podman[237386]: 2026-01-21 23:43:51.010311493 +0000 UTC m=+1.035784435 container died 6fc834508ebaf7f482956c023e0c2024edccb799796c0694e97898a9b8192658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lumiere, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 23:43:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-8aee773db20110ffaa0c03f0d66ef80750f911f9e85d1e3fb579b2bd90c7da76-merged.mount: Deactivated successfully.
Jan 21 23:43:51 compute-0 podman[237386]: 2026-01-21 23:43:51.068310078 +0000 UTC m=+1.093783010 container remove 6fc834508ebaf7f482956c023e0c2024edccb799796c0694e97898a9b8192658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:43:51 compute-0 systemd[1]: libpod-conmon-6fc834508ebaf7f482956c023e0c2024edccb799796c0694e97898a9b8192658.scope: Deactivated successfully.
Jan 21 23:43:51 compute-0 sudo[237124]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:51 compute-0 sudo[237710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:43:51 compute-0 sudo[237758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slixhfflmklfgkgnrwtbublutwlbkvgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039030.895484-1938-151555953033925/AnsiballZ_command.py'
Jan 21 23:43:51 compute-0 sudo[237710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:51 compute-0 sudo[237758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:51 compute-0 sudo[237710]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:51 compute-0 ceph-mon[74318]: pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:51 compute-0 sudo[237763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:43:51 compute-0 sudo[237763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:51 compute-0 sudo[237763]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:51 compute-0 sudo[237788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:43:51 compute-0 sudo[237788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:51 compute-0 sudo[237788]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:51 compute-0 python3.9[237762]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:43:51 compute-0 sudo[237813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:43:51 compute-0 sudo[237813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:51 compute-0 sudo[237758]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:51.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:51 compute-0 podman[237976]: 2026-01-21 23:43:51.732610099 +0000 UTC m=+0.053683104 container create 8f7aa2dad4f941c17582b39ba7c481b2ea7c2d621985a7ad86c32ca76d7185f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:43:51 compute-0 systemd[1]: Started libpod-conmon-8f7aa2dad4f941c17582b39ba7c481b2ea7c2d621985a7ad86c32ca76d7185f3.scope.
Jan 21 23:43:51 compute-0 podman[237976]: 2026-01-21 23:43:51.705801753 +0000 UTC m=+0.026874768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:43:51 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:43:51 compute-0 podman[237976]: 2026-01-21 23:43:51.823260587 +0000 UTC m=+0.144333572 container init 8f7aa2dad4f941c17582b39ba7c481b2ea7c2d621985a7ad86c32ca76d7185f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bhaskara, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 21 23:43:51 compute-0 podman[237976]: 2026-01-21 23:43:51.829948591 +0000 UTC m=+0.151021566 container start 8f7aa2dad4f941c17582b39ba7c481b2ea7c2d621985a7ad86c32ca76d7185f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 21 23:43:51 compute-0 podman[237976]: 2026-01-21 23:43:51.833412166 +0000 UTC m=+0.154485161 container attach 8f7aa2dad4f941c17582b39ba7c481b2ea7c2d621985a7ad86c32ca76d7185f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bhaskara, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:43:51 compute-0 magical_bhaskara[238018]: 167 167
Jan 21 23:43:51 compute-0 systemd[1]: libpod-8f7aa2dad4f941c17582b39ba7c481b2ea7c2d621985a7ad86c32ca76d7185f3.scope: Deactivated successfully.
Jan 21 23:43:51 compute-0 podman[237976]: 2026-01-21 23:43:51.83549736 +0000 UTC m=+0.156570315 container died 8f7aa2dad4f941c17582b39ba7c481b2ea7c2d621985a7ad86c32ca76d7185f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 23:43:51 compute-0 sudo[238048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrdqffxbnwufldxitmqeuohxwzhvdfcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039031.5466697-1938-3817732059729/AnsiballZ_command.py'
Jan 21 23:43:51 compute-0 sudo[238048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4cf4f2971e755595277deb7ac259854f6d3b5636338b106fdc8fa57770caedc-merged.mount: Deactivated successfully.
Jan 21 23:43:51 compute-0 podman[237976]: 2026-01-21 23:43:51.877673033 +0000 UTC m=+0.198746038 container remove 8f7aa2dad4f941c17582b39ba7c481b2ea7c2d621985a7ad86c32ca76d7185f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bhaskara, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:43:51 compute-0 systemd[1]: libpod-conmon-8f7aa2dad4f941c17582b39ba7c481b2ea7c2d621985a7ad86c32ca76d7185f3.scope: Deactivated successfully.
Jan 21 23:43:52 compute-0 python3.9[238053]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:43:52 compute-0 podman[238070]: 2026-01-21 23:43:52.043316023 +0000 UTC m=+0.049423296 container create 6881bc6ca51b806684919266ec82e5aafe79d74b525a97d80b001cdc61af34d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wing, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:43:52 compute-0 sudo[238048]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:52 compute-0 systemd[1]: Started libpod-conmon-6881bc6ca51b806684919266ec82e5aafe79d74b525a97d80b001cdc61af34d8.scope.
Jan 21 23:43:52 compute-0 podman[238070]: 2026-01-21 23:43:52.021841139 +0000 UTC m=+0.027948462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:43:52 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5686ca3727f6d9e37e64fde798f8d3f32d6de1746484a68398fd8e0e84638e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5686ca3727f6d9e37e64fde798f8d3f32d6de1746484a68398fd8e0e84638e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5686ca3727f6d9e37e64fde798f8d3f32d6de1746484a68398fd8e0e84638e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5686ca3727f6d9e37e64fde798f8d3f32d6de1746484a68398fd8e0e84638e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:43:52 compute-0 podman[238070]: 2026-01-21 23:43:52.153959909 +0000 UTC m=+0.160067202 container init 6881bc6ca51b806684919266ec82e5aafe79d74b525a97d80b001cdc61af34d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 21 23:43:52 compute-0 podman[238070]: 2026-01-21 23:43:52.162710125 +0000 UTC m=+0.168817418 container start 6881bc6ca51b806684919266ec82e5aafe79d74b525a97d80b001cdc61af34d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 21 23:43:52 compute-0 podman[238070]: 2026-01-21 23:43:52.167283784 +0000 UTC m=+0.173391057 container attach 6881bc6ca51b806684919266ec82e5aafe79d74b525a97d80b001cdc61af34d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wing, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 23:43:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:43:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:52.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:43:52 compute-0 sudo[238241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mveputclglwygmwnbweaxzavbleoknnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039032.2080543-1938-272369067783257/AnsiballZ_command.py'
Jan 21 23:43:52 compute-0 sudo[238241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:43:52 compute-0 python3.9[238243]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:43:52 compute-0 sudo[238241]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:52 compute-0 trusting_wing[238088]: {
Jan 21 23:43:52 compute-0 trusting_wing[238088]:     "1": [
Jan 21 23:43:52 compute-0 trusting_wing[238088]:         {
Jan 21 23:43:52 compute-0 trusting_wing[238088]:             "devices": [
Jan 21 23:43:52 compute-0 trusting_wing[238088]:                 "/dev/loop3"
Jan 21 23:43:52 compute-0 trusting_wing[238088]:             ],
Jan 21 23:43:52 compute-0 trusting_wing[238088]:             "lv_name": "ceph_lv0",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:             "lv_size": "7511998464",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:             "name": "ceph_lv0",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:             "tags": {
Jan 21 23:43:52 compute-0 trusting_wing[238088]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:                 "ceph.cluster_name": "ceph",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:                 "ceph.crush_device_class": "",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:                 "ceph.encrypted": "0",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:                 "ceph.osd_id": "1",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:                 "ceph.type": "block",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:                 "ceph.vdo": "0"
Jan 21 23:43:52 compute-0 trusting_wing[238088]:             },
Jan 21 23:43:52 compute-0 trusting_wing[238088]:             "type": "block",
Jan 21 23:43:52 compute-0 trusting_wing[238088]:             "vg_name": "ceph_vg0"
Jan 21 23:43:52 compute-0 trusting_wing[238088]:         }
Jan 21 23:43:52 compute-0 trusting_wing[238088]:     ]
Jan 21 23:43:52 compute-0 trusting_wing[238088]: }
Jan 21 23:43:52 compute-0 systemd[1]: libpod-6881bc6ca51b806684919266ec82e5aafe79d74b525a97d80b001cdc61af34d8.scope: Deactivated successfully.
Jan 21 23:43:52 compute-0 podman[238070]: 2026-01-21 23:43:52.898282825 +0000 UTC m=+0.904390138 container died 6881bc6ca51b806684919266ec82e5aafe79d74b525a97d80b001cdc61af34d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff5686ca3727f6d9e37e64fde798f8d3f32d6de1746484a68398fd8e0e84638e-merged.mount: Deactivated successfully.
Jan 21 23:43:52 compute-0 podman[238070]: 2026-01-21 23:43:52.952153634 +0000 UTC m=+0.958260907 container remove 6881bc6ca51b806684919266ec82e5aafe79d74b525a97d80b001cdc61af34d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wing, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 23:43:52 compute-0 systemd[1]: libpod-conmon-6881bc6ca51b806684919266ec82e5aafe79d74b525a97d80b001cdc61af34d8.scope: Deactivated successfully.
Jan 21 23:43:52 compute-0 sudo[237813]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:53 compute-0 sudo[238361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:43:53 compute-0 sudo[238361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:53 compute-0 sudo[238361]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:53 compute-0 sudo[238406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:43:53 compute-0 sudo[238406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:53 compute-0 sudo[238406]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:53 compute-0 sudo[238495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awvjdnmipuugqdkhyfrzqgwjnixsgxqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039032.882678-1938-668659713453/AnsiballZ_command.py'
Jan 21 23:43:53 compute-0 sudo[238442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:43:53 compute-0 sudo[238495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:53 compute-0 sudo[238442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:53 compute-0 sudo[238442]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:53 compute-0 podman[238431]: 2026-01-21 23:43:53.187826405 +0000 UTC m=+0.066047861 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 21 23:43:53 compute-0 sudo[238506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:43:53 compute-0 sudo[238506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:53 compute-0 ceph-mon[74318]: pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:53 compute-0 python3.9[238504]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 23:43:53 compute-0 sudo[238495]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:53.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:53 compute-0 podman[238596]: 2026-01-21 23:43:53.631837384 +0000 UTC m=+0.046385862 container create 9cf1a40aa6f1668774bd5cff5516c7bd35424ddf008b2c497fc614b02c92562d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lovelace, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 21 23:43:53 compute-0 systemd[1]: Started libpod-conmon-9cf1a40aa6f1668774bd5cff5516c7bd35424ddf008b2c497fc614b02c92562d.scope.
Jan 21 23:43:53 compute-0 podman[238596]: 2026-01-21 23:43:53.609242736 +0000 UTC m=+0.023791274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:43:53 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:43:53 compute-0 podman[238596]: 2026-01-21 23:43:53.724054839 +0000 UTC m=+0.138603377 container init 9cf1a40aa6f1668774bd5cff5516c7bd35424ddf008b2c497fc614b02c92562d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:43:53 compute-0 podman[238596]: 2026-01-21 23:43:53.731078584 +0000 UTC m=+0.145627042 container start 9cf1a40aa6f1668774bd5cff5516c7bd35424ddf008b2c497fc614b02c92562d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:43:53 compute-0 podman[238596]: 2026-01-21 23:43:53.734672642 +0000 UTC m=+0.149221180 container attach 9cf1a40aa6f1668774bd5cff5516c7bd35424ddf008b2c497fc614b02c92562d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lovelace, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:43:53 compute-0 gallant_lovelace[238612]: 167 167
Jan 21 23:43:53 compute-0 systemd[1]: libpod-9cf1a40aa6f1668774bd5cff5516c7bd35424ddf008b2c497fc614b02c92562d.scope: Deactivated successfully.
Jan 21 23:43:53 compute-0 podman[238596]: 2026-01-21 23:43:53.739012015 +0000 UTC m=+0.153560513 container died 9cf1a40aa6f1668774bd5cff5516c7bd35424ddf008b2c497fc614b02c92562d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-48325bd1c799224790884c0b362b647c1516908b92c6ecfb4a21ba0cfb257112-merged.mount: Deactivated successfully.
Jan 21 23:43:53 compute-0 podman[238596]: 2026-01-21 23:43:53.789581104 +0000 UTC m=+0.204129592 container remove 9cf1a40aa6f1668774bd5cff5516c7bd35424ddf008b2c497fc614b02c92562d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lovelace, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 23:43:53 compute-0 systemd[1]: libpod-conmon-9cf1a40aa6f1668774bd5cff5516c7bd35424ddf008b2c497fc614b02c92562d.scope: Deactivated successfully.
Jan 21 23:43:53 compute-0 podman[238637]: 2026-01-21 23:43:53.987834715 +0000 UTC m=+0.057032586 container create e5ac17276145b4f52193ba104bfb61fa01e49de6043e0956f5d83ba4687dfb0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mendeleev, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 23:43:54 compute-0 systemd[1]: Started libpod-conmon-e5ac17276145b4f52193ba104bfb61fa01e49de6043e0956f5d83ba4687dfb0d.scope.
Jan 21 23:43:54 compute-0 podman[238637]: 2026-01-21 23:43:53.957905215 +0000 UTC m=+0.027103126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:43:54 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6ed63a86a0ba8e8c01a1ba913c5277f9f2a2661e8e3d901b1255fe4a90a2aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6ed63a86a0ba8e8c01a1ba913c5277f9f2a2661e8e3d901b1255fe4a90a2aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6ed63a86a0ba8e8c01a1ba913c5277f9f2a2661e8e3d901b1255fe4a90a2aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6ed63a86a0ba8e8c01a1ba913c5277f9f2a2661e8e3d901b1255fe4a90a2aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:43:54 compute-0 podman[238637]: 2026-01-21 23:43:54.100369589 +0000 UTC m=+0.169567480 container init e5ac17276145b4f52193ba104bfb61fa01e49de6043e0956f5d83ba4687dfb0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mendeleev, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:43:54 compute-0 podman[238637]: 2026-01-21 23:43:54.112147468 +0000 UTC m=+0.181345309 container start e5ac17276145b4f52193ba104bfb61fa01e49de6043e0956f5d83ba4687dfb0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mendeleev, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:43:54 compute-0 podman[238637]: 2026-01-21 23:43:54.116071887 +0000 UTC m=+0.185269808 container attach e5ac17276145b4f52193ba104bfb61fa01e49de6043e0956f5d83ba4687dfb0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mendeleev, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:43:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:54.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:55 compute-0 objective_mendeleev[238653]: {
Jan 21 23:43:55 compute-0 objective_mendeleev[238653]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:43:55 compute-0 objective_mendeleev[238653]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:43:55 compute-0 objective_mendeleev[238653]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:43:55 compute-0 objective_mendeleev[238653]:         "osd_id": 1,
Jan 21 23:43:55 compute-0 objective_mendeleev[238653]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:43:55 compute-0 objective_mendeleev[238653]:         "type": "bluestore"
Jan 21 23:43:55 compute-0 objective_mendeleev[238653]:     }
Jan 21 23:43:55 compute-0 objective_mendeleev[238653]: }
Jan 21 23:43:55 compute-0 systemd[1]: libpod-e5ac17276145b4f52193ba104bfb61fa01e49de6043e0956f5d83ba4687dfb0d.scope: Deactivated successfully.
Jan 21 23:43:55 compute-0 podman[238637]: 2026-01-21 23:43:55.070130384 +0000 UTC m=+1.139328255 container died e5ac17276145b4f52193ba104bfb61fa01e49de6043e0956f5d83ba4687dfb0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mendeleev, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 23:43:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-af6ed63a86a0ba8e8c01a1ba913c5277f9f2a2661e8e3d901b1255fe4a90a2aa-merged.mount: Deactivated successfully.
Jan 21 23:43:55 compute-0 podman[238637]: 2026-01-21 23:43:55.139758264 +0000 UTC m=+1.208956095 container remove e5ac17276145b4f52193ba104bfb61fa01e49de6043e0956f5d83ba4687dfb0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mendeleev, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:43:55 compute-0 systemd[1]: libpod-conmon-e5ac17276145b4f52193ba104bfb61fa01e49de6043e0956f5d83ba4687dfb0d.scope: Deactivated successfully.
Jan 21 23:43:55 compute-0 sudo[238506]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:43:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:43:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:43:55 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:43:55 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev f5f0b909-b392-46c7-88c7-d64e4765df05 does not exist
Jan 21 23:43:55 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 3d0900af-f6ad-4191-a7f5-ebd7fdcae28d does not exist
Jan 21 23:43:55 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 6b6dd567-c41f-4b21-a482-cc0b0ff5762b does not exist
Jan 21 23:43:55 compute-0 sudo[238687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:43:55 compute-0 sudo[238687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:55 compute-0 sudo[238687]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:55 compute-0 ceph-mon[74318]: pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:55 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:43:55 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:43:55 compute-0 sudo[238712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:43:55 compute-0 sudo[238712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:43:55 compute-0 sudo[238712]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:55.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:56.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:56 compute-0 sudo[238863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qojvklfybijqseupphtkaedxonqcfiqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039035.9526722-2145-235845238912527/AnsiballZ_file.py'
Jan 21 23:43:56 compute-0 sudo[238863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:56 compute-0 python3.9[238865]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:43:56 compute-0 sudo[238863]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:57 compute-0 sudo[239015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqltghlfecaypzfecjbqsdfehmfoulpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039036.7521043-2145-183255582899698/AnsiballZ_file.py'
Jan 21 23:43:57 compute-0 sudo[239015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:57 compute-0 python3.9[239017]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:43:57 compute-0 ceph-mon[74318]: pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:57 compute-0 sudo[239015]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:43:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:57.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:57 compute-0 sudo[239168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubqudjkiiyvprgbzvbpmfhcosxxhotwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039037.4612508-2145-238047907566534/AnsiballZ_file.py'
Jan 21 23:43:57 compute-0 sudo[239168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:58 compute-0 python3.9[239170]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:43:58 compute-0 sudo[239168]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:43:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:43:58.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:43:58 compute-0 sudo[239320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcszpccnwcjrjtgruxprasrxqjxqvgzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039038.3717577-2211-276624473103959/AnsiballZ_file.py'
Jan 21 23:43:58 compute-0 sudo[239320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:58 compute-0 python3.9[239322]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:43:58 compute-0 sudo[239320]: pam_unix(sudo:session): session closed for user root
Jan 21 23:43:59 compute-0 ceph-mon[74318]: pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:43:59 compute-0 sudo[239473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbwtfdigznktowrpbhfbncumyiqkmenm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039039.0883722-2211-76259848741724/AnsiballZ_file.py'
Jan 21 23:43:59 compute-0 sudo[239473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:43:59 compute-0 python3.9[239475]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:43:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:43:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:43:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:43:59.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:43:59 compute-0 sudo[239473]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:44:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:00.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:44:00 compute-0 sudo[239625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llysjxjfocjajghtrxtjfmgvyuxvbhpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039039.8064065-2211-244588935339980/AnsiballZ_file.py'
Jan 21 23:44:00 compute-0 sudo[239625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:00 compute-0 python3.9[239627]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:44:00 compute-0 sudo[239625]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:01 compute-0 sudo[239777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsnmwotlguxbjdrhsgbjojpwksmlfmhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039040.704479-2211-172148453987982/AnsiballZ_file.py'
Jan 21 23:44:01 compute-0 sudo[239777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:01 compute-0 python3.9[239779]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:44:01 compute-0 sudo[239777]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:01 compute-0 ceph-mon[74318]: pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:01 compute-0 anacron[30932]: Job `cron.weekly' started
Jan 21 23:44:01 compute-0 anacron[30932]: Job `cron.weekly' terminated
Jan 21 23:44:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:44:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:01.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:44:01 compute-0 sudo[239932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwdwzogkadruelbavrhimgyamstaibag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039041.3880181-2211-63583046030682/AnsiballZ_file.py'
Jan 21 23:44:01 compute-0 sudo[239932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:01 compute-0 python3.9[239934]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:44:01 compute-0 sudo[239932]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:02 compute-0 sudo[239935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:44:02 compute-0 sudo[239935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:44:02 compute-0 sudo[239935]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:02 compute-0 sudo[239968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:44:02 compute-0 sudo[239968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:44:02 compute-0 sudo[239968]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:02.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:02 compute-0 sudo[240134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzbjkzncipoywzzxegavhqtbnrseyfuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039042.1549926-2211-77605311939329/AnsiballZ_file.py'
Jan 21 23:44:02 compute-0 sudo[240134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:44:02 compute-0 python3.9[240136]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:44:02 compute-0 sudo[240134]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:03 compute-0 sudo[240286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlclwhxiklmiadwkgbmeqoujkwovijmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039042.8852131-2211-25119150204043/AnsiballZ_file.py'
Jan 21 23:44:03 compute-0 sudo[240286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:03 compute-0 ceph-mon[74318]: pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:03 compute-0 python3.9[240288]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:44:03 compute-0 sudo[240286]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:03.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:04.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:05 compute-0 ceph-mon[74318]: pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:44:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:05.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:44:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:06.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:07 compute-0 ceph-mon[74318]: pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:44:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:44:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:07.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:44:08 compute-0 podman[240316]: 2026-01-21 23:44:08.012907415 +0000 UTC m=+0.122401645 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Jan 21 23:44:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:08.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:44:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:44:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:44:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:44:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:44:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:44:09 compute-0 ceph-mon[74318]: pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:09 compute-0 sudo[240468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwjkbtewpzglwktuietawobacgckvsjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039048.8841708-2536-39150461163890/AnsiballZ_getent.py'
Jan 21 23:44:09 compute-0 sudo[240468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:44:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:09.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:44:09 compute-0 python3.9[240470]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 21 23:44:09 compute-0 sudo[240468]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:10.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:10 compute-0 sudo[240621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdjgvxmhwuzbfznfbeweltijjovqejie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039049.9250329-2560-82812246194720/AnsiballZ_group.py'
Jan 21 23:44:10 compute-0 sudo[240621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:10 compute-0 python3.9[240623]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 23:44:10 compute-0 groupadd[240624]: group added to /etc/group: name=nova, GID=42436
Jan 21 23:44:10 compute-0 groupadd[240624]: group added to /etc/gshadow: name=nova
Jan 21 23:44:10 compute-0 groupadd[240624]: new group: name=nova, GID=42436
Jan 21 23:44:10 compute-0 sudo[240621]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:11 compute-0 ceph-mon[74318]: pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:11 compute-0 sudo[240780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jijgdkgmbhilacxxiibfomzzltxyxikr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039050.9835553-2584-134224976341023/AnsiballZ_user.py'
Jan 21 23:44:11 compute-0 sudo[240780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:44:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:11.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:44:11 compute-0 python3.9[240782]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 21 23:44:11 compute-0 useradd[240784]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 21 23:44:11 compute-0 useradd[240784]: add 'nova' to group 'libvirt'
Jan 21 23:44:11 compute-0 useradd[240784]: add 'nova' to shadow group 'libvirt'
Jan 21 23:44:11 compute-0 sudo[240780]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:12.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:44:12 compute-0 sshd-session[240815]: Accepted publickey for zuul from 192.168.122.30 port 49346 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 21 23:44:12 compute-0 systemd-logind[786]: New session 51 of user zuul.
Jan 21 23:44:12 compute-0 systemd[1]: Started Session 51 of User zuul.
Jan 21 23:44:12 compute-0 sshd-session[240815]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 23:44:13 compute-0 sshd-session[240818]: Received disconnect from 192.168.122.30 port 49346:11: disconnected by user
Jan 21 23:44:13 compute-0 sshd-session[240818]: Disconnected from user zuul 192.168.122.30 port 49346
Jan 21 23:44:13 compute-0 sshd-session[240815]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:44:13 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Jan 21 23:44:13 compute-0 systemd-logind[786]: Session 51 logged out. Waiting for processes to exit.
Jan 21 23:44:13 compute-0 systemd-logind[786]: Removed session 51.
Jan 21 23:44:13 compute-0 ceph-mon[74318]: pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:13.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:13 compute-0 python3.9[240969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:44:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:44:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:14.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:44:14 compute-0 ceph-mon[74318]: pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:14 compute-0 python3.9[241090]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769039053.2926724-2659-52962392129422/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:44:15 compute-0 python3.9[241240]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:44:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:15.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:15 compute-0 python3.9[241317]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:44:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:16.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:16 compute-0 python3.9[241467]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:44:17 compute-0 python3.9[241588]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769039055.9223592-2659-112098941065578/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:44:17 compute-0 ceph-mon[74318]: pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:44:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:17.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:17 compute-0 python3.9[241739]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:44:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:44:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:18.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:44:18 compute-0 python3.9[241860]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769039057.3055177-2659-178793532970607/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:44:19 compute-0 python3.9[242010]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:44:19 compute-0 ceph-mon[74318]: pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:19.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:19 compute-0 python3.9[242132]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769039058.6166978-2659-241131437895428/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:44:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:44:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:20.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:44:20 compute-0 python3.9[242282]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:44:21 compute-0 python3.9[242403]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769039059.845453-2659-36714809098214/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:44:21 compute-0 ceph-mon[74318]: pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:44:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:21.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:44:21 compute-0 sudo[242554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfhgraayhbkxuyipcosmwmqzxffuexcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039061.4542003-2908-167515453915037/AnsiballZ_file.py'
Jan 21 23:44:21 compute-0 sudo[242554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:22 compute-0 python3.9[242556]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:44:22 compute-0 sudo[242554]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:22 compute-0 sudo[242581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:44:22 compute-0 sudo[242581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:44:22 compute-0 sudo[242581]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:44:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:22.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:44:22 compute-0 sudo[242611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:44:22 compute-0 sudo[242611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:44:22 compute-0 sudo[242611]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:44:22 compute-0 sudo[242756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xohhdevsnzqusddnexxxwnsoetodxwrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039062.288743-2932-112117503350331/AnsiballZ_copy.py'
Jan 21 23:44:22 compute-0 sudo[242756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:22 compute-0 python3.9[242758]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:44:22 compute-0 sudo[242756]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:23 compute-0 ceph-mon[74318]: pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:23 compute-0 sudo[242920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hikhqkwpzcferdnuarbhttvomvelvmgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039063.0649436-2956-68785124887249/AnsiballZ_stat.py'
Jan 21 23:44:23 compute-0 sudo[242920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:23 compute-0 podman[242882]: 2026-01-21 23:44:23.430830004 +0000 UTC m=+0.080970105 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 23:44:23 compute-0 python3.9[242924]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:44:23 compute-0 sudo[242920]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:23.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:24 compute-0 sudo[243078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzperflgtzqjrifociuzhkalyjnnssmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039063.8476684-2980-246734289241049/AnsiballZ_stat.py'
Jan 21 23:44:24 compute-0 sudo[243078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:24.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:24 compute-0 python3.9[243080]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:44:24 compute-0 sudo[243078]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:24 compute-0 sudo[243201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtbbnpdgqahnkvkwmlrddbtgwzvxbrlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039063.8476684-2980-246734289241049/AnsiballZ_copy.py'
Jan 21 23:44:24 compute-0 sudo[243201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:25 compute-0 python3.9[243203]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769039063.8476684-2980-246734289241049/.source _original_basename=.hb5gvdp7 follow=False checksum=e3d1092a056dc3d5df04a92f9a3ec0874b526b66 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 21 23:44:25 compute-0 sudo[243201]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:25 compute-0 ceph-mon[74318]: pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:25.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:26.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:26 compute-0 python3.9[243356]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:44:27 compute-0 python3.9[243508]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:44:27 compute-0 ceph-mon[74318]: pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:44:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:27.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:27 compute-0 python3.9[243630]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769039066.5365202-3058-59343555861726/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:44:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:28.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:28 compute-0 python3.9[243780]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 23:44:29 compute-0 ceph-mon[74318]: pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:29 compute-0 python3.9[243901]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769039068.175477-3103-189744910543261/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 23:44:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:29.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:44:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:30.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:44:30 compute-0 sudo[244052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moyjqerlqxogyawjouytvlumsjumzobg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039070.0168512-3154-193943961938555/AnsiballZ_container_config_data.py'
Jan 21 23:44:30 compute-0 sudo[244052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:30 compute-0 python3.9[244054]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 21 23:44:30 compute-0 sudo[244052]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:31 compute-0 ceph-mon[74318]: pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:31.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:31 compute-0 sudo[244205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvgzvxgsxpijsooahqvssjznqsuylwks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039071.2925687-3187-121539245274505/AnsiballZ_container_config_hash.py'
Jan 21 23:44:31 compute-0 sudo[244205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:32 compute-0 python3.9[244207]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 21 23:44:32 compute-0 sudo[244205]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:32.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:44:33 compute-0 sudo[244357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akhwoxzzfvxtpmcqhkhvoredyagdwmyg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769039072.535788-3217-59583336537754/AnsiballZ_edpm_container_manage.py'
Jan 21 23:44:33 compute-0 sudo[244357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:33 compute-0 ceph-mon[74318]: pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:33 compute-0 python3[244359]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 21 23:44:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 21 23:44:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:33.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 21 23:44:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:44:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:34.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:44:35 compute-0 ceph-mon[74318]: pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:35.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:36.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:36 compute-0 ceph-mon[74318]: pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:44:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:37.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:38.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:38 compute-0 ceph-mon[74318]: pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:44:39
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.meta', 'images', '.mgr']
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:44:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:44:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:39.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:40.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:41.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:42.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:42 compute-0 sudo[244450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:44:42 compute-0 sudo[244450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:44:42 compute-0 sudo[244450]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:42 compute-0 sudo[244475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:44:42 compute-0 sudo[244475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:44:42 compute-0 sudo[244475]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:44:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:43.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:44 compute-0 ceph-mon[74318]: pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:44.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:44 compute-0 podman[244373]: 2026-01-21 23:44:44.819344466 +0000 UTC m=+11.265794585 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 21 23:44:44 compute-0 podman[244420]: 2026-01-21 23:44:44.859581076 +0000 UTC m=+5.971727826 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Jan 21 23:44:44 compute-0 podman[244540]: 2026-01-21 23:44:44.989863211 +0000 UTC m=+0.059850333 container create fd388b2766020b9672df327e62e305d4d28a4e50e6a36d7cc455c2912573862a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 21 23:44:44 compute-0 podman[244540]: 2026-01-21 23:44:44.953795553 +0000 UTC m=+0.023782725 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 21 23:44:44 compute-0 python3[244359]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 21 23:44:45 compute-0 ceph-mon[74318]: pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:45 compute-0 ceph-mon[74318]: pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:45 compute-0 sudo[244357]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:44:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:45.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:44:45 compute-0 sudo[244729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydhkmotvbpzftwalxwxhczqyvavabhek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039085.4246852-3241-215963924903727/AnsiballZ_stat.py'
Jan 21 23:44:45 compute-0 sudo[244729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:45 compute-0 python3.9[244731]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:44:45 compute-0 sudo[244729]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:46.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:46 compute-0 ceph-mon[74318]: pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:47 compute-0 sudo[244883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlymybfsnfzlfhlpjlyznaubnbetgspb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039086.7608848-3277-77699303767115/AnsiballZ_container_config_data.py'
Jan 21 23:44:47 compute-0 sudo[244883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:47 compute-0 python3.9[244885]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 21 23:44:47 compute-0 sudo[244883]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:44:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:44:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:47.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:44:48 compute-0 sudo[245036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yronfnditjbibiliavfoppoitccexyvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039087.8371742-3310-44765004385707/AnsiballZ_container_config_hash.py'
Jan 21 23:44:48 compute-0 sudo[245036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:48.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:48 compute-0 python3.9[245038]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 21 23:44:48 compute-0 sudo[245036]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:44:48.739 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:44:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:44:48.743 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:44:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:44:48.743 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:44:49 compute-0 sudo[245188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqabtzyzqajlqzvhnjrddacursyidiax ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769039088.9028306-3340-146458475688980/AnsiballZ_edpm_container_manage.py'
Jan 21 23:44:49 compute-0 sudo[245188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:49.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:49 compute-0 python3[245190]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 21 23:44:50 compute-0 podman[245228]: 2026-01-21 23:44:50.018435864 +0000 UTC m=+0.069132074 container create 378445ba08333a0c0d2c90f4ba06a7f2a5bf640c06aa080a3598225440daaa3e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 21 23:44:50 compute-0 podman[245228]: 2026-01-21 23:44:49.983599643 +0000 UTC m=+0.034295813 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 21 23:44:50 compute-0 python3[245190]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 21 23:44:50 compute-0 ceph-mon[74318]: pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:50 compute-0 sudo[245188]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:50.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:50 compute-0 sudo[245416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dehkxwisxaupwfdwezeqzjvrukrepbgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039090.5061738-3364-99086012983584/AnsiballZ_stat.py'
Jan 21 23:44:50 compute-0 sudo[245416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:51 compute-0 python3.9[245418]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:44:51 compute-0 sudo[245416]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:51 compute-0 ceph-mon[74318]: pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:51 compute-0 sudo[245571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smtkjulmdwclbhctlppasgvzwdocohhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039091.3819282-3391-79580920615553/AnsiballZ_file.py'
Jan 21 23:44:51 compute-0 sudo[245571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:51.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:51 compute-0 python3.9[245573]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:44:51 compute-0 sudo[245571]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:52.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:52 compute-0 sudo[245722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upkjvllvuqtfjfrlwhpowgzxvsmlydwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039091.9667933-3391-103778852616432/AnsiballZ_copy.py'
Jan 21 23:44:52 compute-0 sudo[245722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:44:52 compute-0 python3.9[245724]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769039091.9667933-3391-103778852616432/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 23:44:52 compute-0 sudo[245722]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:52 compute-0 sudo[245798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfqpqgjyjbageupzsxejjyixfitsmgbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039091.9667933-3391-103778852616432/AnsiballZ_systemd.py'
Jan 21 23:44:52 compute-0 sudo[245798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:53 compute-0 python3.9[245800]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 23:44:53 compute-0 systemd[1]: Reloading.
Jan 21 23:44:53 compute-0 systemd-rc-local-generator[245830]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:44:53 compute-0 systemd-sysv-generator[245833]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:44:53 compute-0 ceph-mon[74318]: pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:53 compute-0 sudo[245798]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:53 compute-0 podman[245839]: 2026-01-21 23:44:53.592645781 +0000 UTC m=+0.057550981 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 23:44:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:53.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:53 compute-0 sudo[245931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amyywbdkgpqxtqubjkaygqaioememdro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039091.9667933-3391-103778852616432/AnsiballZ_systemd.py'
Jan 21 23:44:53 compute-0 sudo[245931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:54 compute-0 python3.9[245933]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:44:54 compute-0 systemd[1]: Reloading.
Jan 21 23:44:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:54.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:54 compute-0 systemd-sysv-generator[245967]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 23:44:54 compute-0 systemd-rc-local-generator[245961]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 23:44:54 compute-0 systemd[1]: Starting nova_compute container...
Jan 21 23:44:54 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:44:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff850024672c7c6bad188b038e310b567ac41c6db0d54bf9c898ddfda0c08a36/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 21 23:44:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff850024672c7c6bad188b038e310b567ac41c6db0d54bf9c898ddfda0c08a36/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 21 23:44:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff850024672c7c6bad188b038e310b567ac41c6db0d54bf9c898ddfda0c08a36/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 21 23:44:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff850024672c7c6bad188b038e310b567ac41c6db0d54bf9c898ddfda0c08a36/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 21 23:44:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff850024672c7c6bad188b038e310b567ac41c6db0d54bf9c898ddfda0c08a36/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 21 23:44:54 compute-0 podman[245973]: 2026-01-21 23:44:54.655604769 +0000 UTC m=+0.133059375 container init 378445ba08333a0c0d2c90f4ba06a7f2a5bf640c06aa080a3598225440daaa3e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=edpm, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Jan 21 23:44:54 compute-0 podman[245973]: 2026-01-21 23:44:54.669911166 +0000 UTC m=+0.147365722 container start 378445ba08333a0c0d2c90f4ba06a7f2a5bf640c06aa080a3598225440daaa3e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible)
Jan 21 23:44:54 compute-0 podman[245973]: nova_compute
Jan 21 23:44:54 compute-0 nova_compute[245988]: + sudo -E kolla_set_configs
Jan 21 23:44:54 compute-0 systemd[1]: Started nova_compute container.
Jan 21 23:44:54 compute-0 sudo[245931]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Validating config file
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Copying service configuration files
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Deleting /etc/ceph
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Creating directory /etc/ceph
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Setting permission for /etc/ceph
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Writing out command to execute
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 21 23:44:54 compute-0 nova_compute[245988]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 21 23:44:54 compute-0 nova_compute[245988]: ++ cat /run_command
Jan 21 23:44:54 compute-0 nova_compute[245988]: + CMD=nova-compute
Jan 21 23:44:54 compute-0 nova_compute[245988]: + ARGS=
Jan 21 23:44:54 compute-0 nova_compute[245988]: + sudo kolla_copy_cacerts
Jan 21 23:44:54 compute-0 nova_compute[245988]: + [[ ! -n '' ]]
Jan 21 23:44:54 compute-0 nova_compute[245988]: + . kolla_extend_start
Jan 21 23:44:54 compute-0 nova_compute[245988]: + echo 'Running command: '\''nova-compute'\'''
Jan 21 23:44:54 compute-0 nova_compute[245988]: Running command: 'nova-compute'
Jan 21 23:44:54 compute-0 nova_compute[245988]: + umask 0022
Jan 21 23:44:54 compute-0 nova_compute[245988]: + exec nova-compute
Jan 21 23:44:55 compute-0 ceph-mon[74318]: pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:55.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:55 compute-0 sudo[246056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:44:55 compute-0 sudo[246056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:44:55 compute-0 sudo[246056]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:55 compute-0 sudo[246104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:44:55 compute-0 sudo[246104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:44:55 compute-0 sudo[246104]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:55 compute-0 sudo[246151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:44:55 compute-0 sudo[246151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:44:55 compute-0 sudo[246151]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:55 compute-0 sudo[246199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 21 23:44:55 compute-0 sudo[246199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:44:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:56 compute-0 python3.9[246251]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:44:56 compute-0 sudo[246199]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:44:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:56.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:44:56 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:44:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:44:56 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:44:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:44:56 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:44:56 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:44:56 compute-0 sudo[246295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:44:56 compute-0 sudo[246295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:44:56 compute-0 sudo[246295]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:56 compute-0 sudo[246320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:44:56 compute-0 sudo[246320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:44:56 compute-0 sudo[246320]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:56 compute-0 sudo[246357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:44:56 compute-0 sudo[246357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:44:56 compute-0 sudo[246357]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:56 compute-0 sudo[246405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:44:56 compute-0 sudo[246405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:44:57 compute-0 python3.9[246534]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:44:57 compute-0 sudo[246405]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:57 compute-0 nova_compute[245988]: 2026-01-21 23:44:57.215 245992 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 21 23:44:57 compute-0 nova_compute[245988]: 2026-01-21 23:44:57.215 245992 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 21 23:44:57 compute-0 nova_compute[245988]: 2026-01-21 23:44:57.215 245992 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 21 23:44:57 compute-0 nova_compute[245988]: 2026-01-21 23:44:57.216 245992 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 21 23:44:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:44:57 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:44:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:44:57 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:44:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:44:57 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:44:57 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev c2872b42-d13d-44f4-b25d-37671ebcb65c does not exist
Jan 21 23:44:57 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 9b208d4e-eac5-4391-b84f-ee4f3324bdf1 does not exist
Jan 21 23:44:57 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 2935c5cf-7740-4a9d-b58e-b04d11602d3c does not exist
Jan 21 23:44:57 compute-0 nova_compute[245988]: 2026-01-21 23:44:57.389 245992 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:44:57 compute-0 nova_compute[245988]: 2026-01-21 23:44:57.403 245992 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:44:57 compute-0 nova_compute[245988]: 2026-01-21 23:44:57.404 245992 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 21 23:44:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:44:57 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:44:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:44:57 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:44:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:44:57 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:44:57 compute-0 sudo[246581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:44:57 compute-0 sudo[246581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:44:57 compute-0 sudo[246581]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:44:57 compute-0 sudo[246629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:44:57 compute-0 sudo[246629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:44:57 compute-0 ceph-mon[74318]: pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:57 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:44:57 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:44:57 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:44:57 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:44:57 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:44:57 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:44:57 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:44:57 compute-0 sudo[246629]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:57 compute-0 sudo[246683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:44:57 compute-0 sudo[246683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:44:57 compute-0 sudo[246683]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:44:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:57.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:44:57 compute-0 sudo[246731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:44:57 compute-0 sudo[246731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:44:58 compute-0 python3.9[246806]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 23:44:58 compute-0 podman[246850]: 2026-01-21 23:44:58.143831946 +0000 UTC m=+0.086397625 container create 250101eeea58e3b5be22730420543b384e60af8b1d857c2a5f0551ec7c2d03ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ardinghelli, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 23:44:58 compute-0 podman[246850]: 2026-01-21 23:44:58.079624988 +0000 UTC m=+0.022190707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:44:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:58 compute-0 systemd[1]: Started libpod-conmon-250101eeea58e3b5be22730420543b384e60af8b1d857c2a5f0551ec7c2d03ce.scope.
Jan 21 23:44:58 compute-0 nova_compute[245988]: 2026-01-21 23:44:58.234 245992 INFO nova.virt.driver [None req-6a5cfa2d-7fa3-414a-840e-eb80d3925b63 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 21 23:44:58 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:44:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:44:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:44:58.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:44:58 compute-0 podman[246850]: 2026-01-21 23:44:58.321072541 +0000 UTC m=+0.263638230 container init 250101eeea58e3b5be22730420543b384e60af8b1d857c2a5f0551ec7c2d03ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ardinghelli, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:44:58 compute-0 podman[246850]: 2026-01-21 23:44:58.32965067 +0000 UTC m=+0.272216299 container start 250101eeea58e3b5be22730420543b384e60af8b1d857c2a5f0551ec7c2d03ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ardinghelli, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Jan 21 23:44:58 compute-0 nice_ardinghelli[246887]: 167 167
Jan 21 23:44:58 compute-0 systemd[1]: libpod-250101eeea58e3b5be22730420543b384e60af8b1d857c2a5f0551ec7c2d03ce.scope: Deactivated successfully.
Jan 21 23:44:58 compute-0 conmon[246887]: conmon 250101eeea58e3b5be22 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-250101eeea58e3b5be22730420543b384e60af8b1d857c2a5f0551ec7c2d03ce.scope/container/memory.events
Jan 21 23:44:58 compute-0 nova_compute[245988]: 2026-01-21 23:44:58.361 245992 INFO nova.compute.provider_config [None req-6a5cfa2d-7fa3-414a-840e-eb80d3925b63 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 21 23:44:58 compute-0 podman[246850]: 2026-01-21 23:44:58.395398417 +0000 UTC m=+0.337964076 container attach 250101eeea58e3b5be22730420543b384e60af8b1d857c2a5f0551ec7c2d03ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 23:44:58 compute-0 podman[246850]: 2026-01-21 23:44:58.395819871 +0000 UTC m=+0.338385510 container died 250101eeea58e3b5be22730420543b384e60af8b1d857c2a5f0551ec7c2d03ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Jan 21 23:44:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d96cf5e656e670ce30f4f1fff32f605003ec6847a7d457f8e533a2e614e24ab6-merged.mount: Deactivated successfully.
Jan 21 23:44:58 compute-0 podman[246850]: 2026-01-21 23:44:58.552546344 +0000 UTC m=+0.495111983 container remove 250101eeea58e3b5be22730420543b384e60af8b1d857c2a5f0551ec7c2d03ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ardinghelli, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 21 23:44:58 compute-0 systemd[1]: libpod-conmon-250101eeea58e3b5be22730420543b384e60af8b1d857c2a5f0551ec7c2d03ce.scope: Deactivated successfully.
Jan 21 23:44:58 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:44:58 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:44:58 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:44:58 compute-0 ceph-mon[74318]: pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:44:58 compute-0 podman[246963]: 2026-01-21 23:44:58.743754346 +0000 UTC m=+0.073154720 container create 199ab3db1872b0c737ab9f159c494af5f7a3cd8bdf9c7e45a7420020dd58107d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lichterman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 23:44:58 compute-0 podman[246963]: 2026-01-21 23:44:58.696632352 +0000 UTC m=+0.026032746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:44:58 compute-0 systemd[1]: Started libpod-conmon-199ab3db1872b0c737ab9f159c494af5f7a3cd8bdf9c7e45a7420020dd58107d.scope.
Jan 21 23:44:58 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54537474bf6956c832538af5064960729f0211eb14b97d5c0cd0f66a709297e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54537474bf6956c832538af5064960729f0211eb14b97d5c0cd0f66a709297e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54537474bf6956c832538af5064960729f0211eb14b97d5c0cd0f66a709297e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54537474bf6956c832538af5064960729f0211eb14b97d5c0cd0f66a709297e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54537474bf6956c832538af5064960729f0211eb14b97d5c0cd0f66a709297e5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:44:58 compute-0 podman[246963]: 2026-01-21 23:44:58.957950938 +0000 UTC m=+0.287351392 container init 199ab3db1872b0c737ab9f159c494af5f7a3cd8bdf9c7e45a7420020dd58107d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 21 23:44:58 compute-0 podman[246963]: 2026-01-21 23:44:58.966837446 +0000 UTC m=+0.296237820 container start 199ab3db1872b0c737ab9f159c494af5f7a3cd8bdf9c7e45a7420020dd58107d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lichterman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:44:59 compute-0 podman[246963]: 2026-01-21 23:44:59.011721091 +0000 UTC m=+0.341121555 container attach 199ab3db1872b0c737ab9f159c494af5f7a3cd8bdf9c7e45a7420020dd58107d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 23:44:59 compute-0 sudo[247057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enciqoyrdymhvlhfzsrebkqsksjfzeag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039098.53681-3571-169316000577043/AnsiballZ_podman_container.py'
Jan 21 23:44:59 compute-0 sudo[247057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:44:59 compute-0 python3.9[247059]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 21 23:44:59 compute-0 sudo[247057]: pam_unix(sudo:session): session closed for user root
Jan 21 23:44:59 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 23:44:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:44:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:44:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:44:59.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:44:59 compute-0 ecstatic_lichterman[247002]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:44:59 compute-0 ecstatic_lichterman[247002]: --> relative data size: 1.0
Jan 21 23:44:59 compute-0 ecstatic_lichterman[247002]: --> All data devices are unavailable
Jan 21 23:44:59 compute-0 systemd[1]: libpod-199ab3db1872b0c737ab9f159c494af5f7a3cd8bdf9c7e45a7420020dd58107d.scope: Deactivated successfully.
Jan 21 23:44:59 compute-0 podman[246963]: 2026-01-21 23:44:59.854434517 +0000 UTC m=+1.183834891 container died 199ab3db1872b0c737ab9f159c494af5f7a3cd8bdf9c7e45a7420020dd58107d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 21 23:44:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-54537474bf6956c832538af5064960729f0211eb14b97d5c0cd0f66a709297e5-merged.mount: Deactivated successfully.
Jan 21 23:44:59 compute-0 podman[246963]: 2026-01-21 23:44:59.961920509 +0000 UTC m=+1.291320893 container remove 199ab3db1872b0c737ab9f159c494af5f7a3cd8bdf9c7e45a7420020dd58107d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:44:59 compute-0 systemd[1]: libpod-conmon-199ab3db1872b0c737ab9f159c494af5f7a3cd8bdf9c7e45a7420020dd58107d.scope: Deactivated successfully.
Jan 21 23:45:00 compute-0 sudo[246731]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:00 compute-0 sudo[247205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:45:00 compute-0 sudo[247205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:45:00 compute-0 sudo[247205]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:00 compute-0 sudo[247254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:45:00 compute-0 sudo[247254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:45:00 compute-0 sudo[247254]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:00 compute-0 sudo[247304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttfcrmctytjjgwqkumfjrbcvopkemfun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039099.7517622-3595-279330330104641/AnsiballZ_systemd.py'
Jan 21 23:45:00 compute-0 sudo[247304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:45:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:00 compute-0 sudo[247307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:45:00 compute-0 sudo[247307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:45:00 compute-0 sudo[247307]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:00.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:00 compute-0 sudo[247333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:45:00 compute-0 sudo[247333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:45:00 compute-0 python3.9[247310]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
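The ansible-ansible.builtin.systemd invocation above (name=edpm_nova_compute.service, state=restarted, scope=system) is what triggers the container stop/start sequence that follows. In effect it amounts to a systemctl restart; a minimal sketch of that equivalence (an assumption, not the module's code):

import subprocess

def restart_unit(name: str, scope: str = "system") -> None:
    # ansible.builtin.systemd with state=restarted reduces, in effect,
    # to `systemctl restart <unit>`; scope=user would add --user.
    cmd = ["systemctl"]
    if scope == "user":
        cmd.append("--user")
    cmd += ["restart", name]
    subprocess.run(cmd, check=True)

restart_unit("edpm_nova_compute.service")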
Jan 21 23:45:00 compute-0 systemd[1]: Stopping nova_compute container...
Jan 21 23:45:00 compute-0 systemd[1]: libpod-378445ba08333a0c0d2c90f4ba06a7f2a5bf640c06aa080a3598225440daaa3e.scope: Deactivated successfully.
Jan 21 23:45:00 compute-0 systemd[1]: libpod-378445ba08333a0c0d2c90f4ba06a7f2a5bf640c06aa080a3598225440daaa3e.scope: Consumed 2.740s CPU time.
Jan 21 23:45:00 compute-0 podman[247389]: 2026-01-21 23:45:00.671225392 +0000 UTC m=+0.083882376 container died 378445ba08333a0c0d2c90f4ba06a7f2a5bf640c06aa080a3598225440daaa3e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 23:45:00 compute-0 podman[247418]: 2026-01-21 23:45:00.762301321 +0000 UTC m=+0.111482389 container create 039c193209e373a358191350d9eb51f0b58f3386295401c73379f6598f9ac5e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 23:45:00 compute-0 podman[247418]: 2026-01-21 23:45:00.672358987 +0000 UTC m=+0.021540055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:45:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-378445ba08333a0c0d2c90f4ba06a7f2a5bf640c06aa080a3598225440daaa3e-userdata-shm.mount: Deactivated successfully.
Jan 21 23:45:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff850024672c7c6bad188b038e310b567ac41c6db0d54bf9c898ddfda0c08a36-merged.mount: Deactivated successfully.
Jan 21 23:45:01 compute-0 podman[247389]: 2026-01-21 23:45:01.547334123 +0000 UTC m=+0.959991097 container cleanup 378445ba08333a0c0d2c90f4ba06a7f2a5bf640c06aa080a3598225440daaa3e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 21 23:45:01 compute-0 podman[247389]: nova_compute
Jan 21 23:45:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:01.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:01 compute-0 systemd[1]: Started libpod-conmon-039c193209e373a358191350d9eb51f0b58f3386295401c73379f6598f9ac5e5.scope.
Jan 21 23:45:01 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:45:01 compute-0 podman[247418]: 2026-01-21 23:45:01.991934303 +0000 UTC m=+1.341115391 container init 039c193209e373a358191350d9eb51f0b58f3386295401c73379f6598f9ac5e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:45:01 compute-0 podman[247418]: 2026-01-21 23:45:01.999141879 +0000 UTC m=+1.348322947 container start 039c193209e373a358191350d9eb51f0b58f3386295401c73379f6598f9ac5e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tu, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:45:02 compute-0 nervous_tu[247461]: 167 167
Jan 21 23:45:02 compute-0 systemd[1]: libpod-039c193209e373a358191350d9eb51f0b58f3386295401c73379f6598f9ac5e5.scope: Deactivated successfully.
Jan 21 23:45:02 compute-0 podman[247418]: 2026-01-21 23:45:02.056150382 +0000 UTC m=+1.405331540 container attach 039c193209e373a358191350d9eb51f0b58f3386295401c73379f6598f9ac5e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tu, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:45:02 compute-0 podman[247418]: 2026-01-21 23:45:02.056900456 +0000 UTC m=+1.406081574 container died 039c193209e373a358191350d9eb51f0b58f3386295401c73379f6598f9ac5e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:45:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-69ad0d0f540283021e3c7dfdcff9628b850d55798c089156c4fe6f4505fb4a5d-merged.mount: Deactivated successfully.
Jan 21 23:45:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:02 compute-0 podman[247418]: 2026-01-21 23:45:02.277447776 +0000 UTC m=+1.626628884 container remove 039c193209e373a358191350d9eb51f0b58f3386295401c73379f6598f9ac5e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 21 23:45:02 compute-0 ceph-mon[74318]: pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:02.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:02 compute-0 systemd[1]: libpod-conmon-039c193209e373a358191350d9eb51f0b58f3386295401c73379f6598f9ac5e5.scope: Deactivated successfully.
Jan 21 23:45:02 compute-0 podman[247448]: nova_compute
Jan 21 23:45:02 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 21 23:45:02 compute-0 systemd[1]: Stopped nova_compute container.
Jan 21 23:45:02 compute-0 systemd[1]: Starting nova_compute container...
Jan 21 23:45:02 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff850024672c7c6bad188b038e310b567ac41c6db0d54bf9c898ddfda0c08a36/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 21 23:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff850024672c7c6bad188b038e310b567ac41c6db0d54bf9c898ddfda0c08a36/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 21 23:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff850024672c7c6bad188b038e310b567ac41c6db0d54bf9c898ddfda0c08a36/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 21 23:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff850024672c7c6bad188b038e310b567ac41c6db0d54bf9c898ddfda0c08a36/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 21 23:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff850024672c7c6bad188b038e310b567ac41c6db0d54bf9c898ddfda0c08a36/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 21 23:45:02 compute-0 podman[247502]: 2026-01-21 23:45:02.522623777 +0000 UTC m=+0.081518301 container create 70450899da044cb1e6783c596f42f8c27727531add79f5585dd0a5b7453dfcd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:45:02 compute-0 podman[247484]: 2026-01-21 23:45:02.527547161 +0000 UTC m=+0.148904380 container init 378445ba08333a0c0d2c90f4ba06a7f2a5bf640c06aa080a3598225440daaa3e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 21 23:45:02 compute-0 podman[247484]: 2026-01-21 23:45:02.538135942 +0000 UTC m=+0.159493131 container start 378445ba08333a0c0d2c90f4ba06a7f2a5bf640c06aa080a3598225440daaa3e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute)
Jan 21 23:45:02 compute-0 nova_compute[247516]: + sudo -E kolla_set_configs
Jan 21 23:45:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:45:02 compute-0 podman[247484]: nova_compute
Jan 21 23:45:02 compute-0 podman[247502]: 2026-01-21 23:45:02.485696182 +0000 UTC m=+0.044590756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:45:02 compute-0 systemd[1]: Started nova_compute container.
Jan 21 23:45:02 compute-0 systemd[1]: Started libpod-conmon-70450899da044cb1e6783c596f42f8c27727531add79f5585dd0a5b7453dfcd2.scope.
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Validating config file
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Copying service configuration files
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Deleting /etc/ceph
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Creating directory /etc/ceph
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Setting permission for /etc/ceph
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Writing out command to execute
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 21 23:45:02 compute-0 nova_compute[247516]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 21 23:45:02 compute-0 nova_compute[247516]: ++ cat /run_command
Jan 21 23:45:02 compute-0 sudo[247304]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:02 compute-0 nova_compute[247516]: + CMD=nova-compute
Jan 21 23:45:02 compute-0 nova_compute[247516]: + ARGS=
Jan 21 23:45:02 compute-0 nova_compute[247516]: + sudo kolla_copy_cacerts
Jan 21 23:45:02 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086d269733a6096d14fd5acf486ac2b6c9b4454ba365fa75e76424ff8ab82178/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086d269733a6096d14fd5acf486ac2b6c9b4454ba365fa75e76424ff8ab82178/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086d269733a6096d14fd5acf486ac2b6c9b4454ba365fa75e76424ff8ab82178/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086d269733a6096d14fd5acf486ac2b6c9b4454ba365fa75e76424ff8ab82178/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:45:02 compute-0 nova_compute[247516]: + [[ ! -n '' ]]
Jan 21 23:45:02 compute-0 nova_compute[247516]: + . kolla_extend_start
Jan 21 23:45:02 compute-0 nova_compute[247516]: + echo 'Running command: '\''nova-compute'\'''
Jan 21 23:45:02 compute-0 nova_compute[247516]: Running command: 'nova-compute'
Jan 21 23:45:02 compute-0 nova_compute[247516]: + umask 0022
Jan 21 23:45:02 compute-0 nova_compute[247516]: + exec nova-compute
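The kolla_start trace above runs kolla_set_configs (load /var/lib/kolla/config_files/config.json, validate it, delete each destination, copy the source into place, reset permissions), reads /run_command into CMD, and finally execs nova-compute. A minimal sketch of the delete/copy/chown loop, assuming config.json entries carry source, dest, owner and perm keys and handling plain files only (the real tool also creates directories and copies trees, as the /etc/ceph lines show):

import json
import shutil
import subprocess
from pathlib import Path

def copy_config_files(config_path: str = "/var/lib/kolla/config_files/config.json") -> None:
    cfg = json.loads(Path(config_path).read_text())
    for entry in cfg.get("config_files", []):
        src, dest = Path(entry["source"]), Path(entry["dest"])
        if dest.exists():                     # "Deleting <dest>" in the log
            dest.unlink()
        shutil.copy2(src, dest)               # "Copying <src> to <dest>"
        if "owner" in entry:                  # "Setting permission for <dest>"
            subprocess.run(["chown", entry["owner"], str(dest)], check=True)
        if "perm" in entry:
            dest.chmod(int(entry["perm"], 8))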
Jan 21 23:45:02 compute-0 podman[247502]: 2026-01-21 23:45:02.673187598 +0000 UTC m=+0.232082152 container init 70450899da044cb1e6783c596f42f8c27727531add79f5585dd0a5b7453dfcd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_darwin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:45:02 compute-0 podman[247502]: 2026-01-21 23:45:02.682837179 +0000 UTC m=+0.241731703 container start 70450899da044cb1e6783c596f42f8c27727531add79f5585dd0a5b7453dfcd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:45:02 compute-0 podman[247502]: 2026-01-21 23:45:02.687850926 +0000 UTC m=+0.246745470 container attach 70450899da044cb1e6783c596f42f8c27727531add79f5585dd0a5b7453dfcd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_darwin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Jan 21 23:45:02 compute-0 sudo[247563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:45:02 compute-0 sudo[247563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:45:02 compute-0 sudo[247563]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:02 compute-0 sudo[247594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:45:02 compute-0 sudo[247594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:45:02 compute-0 sudo[247594]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:03 compute-0 sudo[247738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcukeysvntxokxkhpomliwldwbiozbtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769039102.9555717-3622-197734859015537/AnsiballZ_podman_container.py'
Jan 21 23:45:03 compute-0 sudo[247738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 23:45:03 compute-0 ceph-mon[74318]: pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]: {
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:     "1": [
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:         {
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:             "devices": [
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:                 "/dev/loop3"
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:             ],
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:             "lv_name": "ceph_lv0",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:             "lv_size": "7511998464",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:             "name": "ceph_lv0",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:             "tags": {
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:                 "ceph.cluster_name": "ceph",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:                 "ceph.crush_device_class": "",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:                 "ceph.encrypted": "0",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:                 "ceph.osd_id": "1",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:                 "ceph.type": "block",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:                 "ceph.vdo": "0"
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:             },
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:             "type": "block",
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:             "vg_name": "ceph_vg0"
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:         }
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]:     ]
Jan 21 23:45:03 compute-0 hardcore_darwin[247530]: }
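The block above is the JSON that `ceph-volume lvm list --format json` returns through cephadm: top-level keys are OSD ids, each mapping to a list of LV records with devices, paths and ceph.* tags. A short sketch that reduces it to one row per OSD, assuming the output has been captured into a string:

import json

def summarize_lvm_list(raw: str) -> list[dict]:
    data = json.loads(raw)
    rows = []
    for osd_id, lvs in data.items():
        for lv in lvs:
            rows.append({
                "osd_id": osd_id,                        # "1" above
                "devices": lv["devices"],                # ["/dev/loop3"]
                "block": lv["lv_path"],                  # /dev/ceph_vg0/ceph_lv0
                "osd_fsid": lv["tags"]["ceph.osd_fsid"],
            })
    return rows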
Jan 21 23:45:03 compute-0 python3.9[247740]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
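This second podman_container invocation (name=nova_compute_init, state=started, image=None) only needs the existing container started, which is exactly what the PODMAN-CONTAINER-DEBUG line below records (`podman start nova_compute_init`). A trivial sketch of that path:

import subprocess

def ensure_started(name: str) -> None:
    # state=started with no image reduces to starting the
    # already-created container, per the debug line below.
    subprocess.run(["podman", "start", name], check=True)

ensure_started("nova_compute_init")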
Jan 21 23:45:03 compute-0 systemd[1]: libpod-70450899da044cb1e6783c596f42f8c27727531add79f5585dd0a5b7453dfcd2.scope: Deactivated successfully.
Jan 21 23:45:03 compute-0 podman[247502]: 2026-01-21 23:45:03.549135194 +0000 UTC m=+1.108029758 container died 70450899da044cb1e6783c596f42f8c27727531add79f5585dd0a5b7453dfcd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:45:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-086d269733a6096d14fd5acf486ac2b6c9b4454ba365fa75e76424ff8ab82178-merged.mount: Deactivated successfully.
Jan 21 23:45:03 compute-0 podman[247502]: 2026-01-21 23:45:03.632186362 +0000 UTC m=+1.191080896 container remove 70450899da044cb1e6783c596f42f8c27727531add79f5585dd0a5b7453dfcd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:45:03 compute-0 systemd[1]: libpod-conmon-70450899da044cb1e6783c596f42f8c27727531add79f5585dd0a5b7453dfcd2.scope: Deactivated successfully.
Jan 21 23:45:03 compute-0 sudo[247333]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:03.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:03 compute-0 systemd[1]: Started libpod-conmon-fd388b2766020b9672df327e62e305d4d28a4e50e6a36d7cc455c2912573862a.scope.
Jan 21 23:45:03 compute-0 sudo[247782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:45:03 compute-0 sudo[247782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:45:03 compute-0 sudo[247782]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:03 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:45:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35973583d41618f85de0403ac109d1209a7b22d3e474d054cf01f34c87b28e98/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 21 23:45:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35973583d41618f85de0403ac109d1209a7b22d3e474d054cf01f34c87b28e98/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 21 23:45:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35973583d41618f85de0403ac109d1209a7b22d3e474d054cf01f34c87b28e98/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 21 23:45:03 compute-0 podman[247781]: 2026-01-21 23:45:03.831143737 +0000 UTC m=+0.129836544 container init fd388b2766020b9672df327e62e305d4d28a4e50e6a36d7cc455c2912573862a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute_init)
Jan 21 23:45:03 compute-0 podman[247781]: 2026-01-21 23:45:03.84050412 +0000 UTC m=+0.139196907 container start fd388b2766020b9672df327e62e305d4d28a4e50e6a36d7cc455c2912573862a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_id=edpm, managed_by=edpm_ansible)
Jan 21 23:45:03 compute-0 sudo[247823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:45:03 compute-0 sudo[247823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:45:03 compute-0 python3.9[247740]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 21 23:45:03 compute-0 sudo[247823]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:03 compute-0 nova_compute_init[247850]: INFO:nova_statedir:Applying nova statedir ownership
Jan 21 23:45:03 compute-0 nova_compute_init[247850]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 21 23:45:03 compute-0 nova_compute_init[247850]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 21 23:45:03 compute-0 nova_compute_init[247850]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 21 23:45:03 compute-0 nova_compute_init[247850]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 21 23:45:03 compute-0 nova_compute_init[247850]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 21 23:45:03 compute-0 nova_compute_init[247850]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 21 23:45:03 compute-0 nova_compute_init[247850]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 21 23:45:03 compute-0 nova_compute_init[247850]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 21 23:45:03 compute-0 nova_compute_init[247850]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 21 23:45:03 compute-0 nova_compute_init[247850]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 21 23:45:03 compute-0 nova_compute_init[247850]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 21 23:45:03 compute-0 nova_compute_init[247850]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 21 23:45:03 compute-0 nova_compute_init[247850]: INFO:nova_statedir:Nova statedir ownership complete
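The nova_compute_init messages above trace a recursive ownership pass over /var/lib/nova: compare each path's uid:gid against the 42436:42436 target, chown on mismatch, skip the path named in NOVA_STATEDIR_OWNERSHIP_SKIP, and (not reproduced here) set the SELinux context. A minimal, simplified sketch of that walk, assumed rather than the actual nova_statedir_ownership.py:

import os

TARGET_UID = TARGET_GID = 42436          # "Target ownership ... 42436:42436"
SKIP = {os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "")}

def fix_ownership(root: str = "/var/lib/nova") -> None:
    for dirpath, _dirnames, filenames in os.walk(root):
        # Visit the directory itself, then the files it contains.
        for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            if path in SKIP:
                continue
            st = os.lstat(path)
            if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                os.chown(path, TARGET_UID, TARGET_GID, follow_symlinks=False)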
Jan 21 23:45:03 compute-0 sudo[247853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:45:03 compute-0 sudo[247853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:45:03 compute-0 sudo[247853]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:03 compute-0 systemd[1]: libpod-fd388b2766020b9672df327e62e305d4d28a4e50e6a36d7cc455c2912573862a.scope: Deactivated successfully.
Jan 21 23:45:04 compute-0 podman[247891]: 2026-01-21 23:45:04.032898639 +0000 UTC m=+0.088945703 container died fd388b2766020b9672df327e62e305d4d28a4e50e6a36d7cc455c2912573862a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 21 23:45:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd388b2766020b9672df327e62e305d4d28a4e50e6a36d7cc455c2912573862a-userdata-shm.mount: Deactivated successfully.
Jan 21 23:45:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-35973583d41618f85de0403ac109d1209a7b22d3e474d054cf01f34c87b28e98-merged.mount: Deactivated successfully.
Jan 21 23:45:04 compute-0 podman[247891]: 2026-01-21 23:45:04.060588496 +0000 UTC m=+0.116635540 container cleanup fd388b2766020b9672df327e62e305d4d28a4e50e6a36d7cc455c2912573862a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, managed_by=edpm_ansible)
Jan 21 23:45:04 compute-0 sudo[247895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:45:04 compute-0 sudo[247895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:45:04 compute-0 systemd[1]: libpod-conmon-fd388b2766020b9672df327e62e305d4d28a4e50e6a36d7cc455c2912573862a.scope: Deactivated successfully.
Jan 21 23:45:04 compute-0 sudo[247738]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:04.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:04 compute-0 podman[248003]: 2026-01-21 23:45:04.369034096 +0000 UTC m=+0.046518916 container create 13d000c964c7bb5e1555f353a92a0be3ebe786346e6c8c0ee30fe2468c0889cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_solomon, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 23:45:04 compute-0 systemd[1]: Started libpod-conmon-13d000c964c7bb5e1555f353a92a0be3ebe786346e6c8c0ee30fe2468c0889cc.scope.
Jan 21 23:45:04 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:45:04 compute-0 podman[248003]: 2026-01-21 23:45:04.347235724 +0000 UTC m=+0.024720524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:45:04 compute-0 podman[248003]: 2026-01-21 23:45:04.454799619 +0000 UTC m=+0.132284479 container init 13d000c964c7bb5e1555f353a92a0be3ebe786346e6c8c0ee30fe2468c0889cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 23:45:04 compute-0 podman[248003]: 2026-01-21 23:45:04.460619561 +0000 UTC m=+0.138104351 container start 13d000c964c7bb5e1555f353a92a0be3ebe786346e6c8c0ee30fe2468c0889cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 21 23:45:04 compute-0 podman[248003]: 2026-01-21 23:45:04.464250035 +0000 UTC m=+0.141734925 container attach 13d000c964c7bb5e1555f353a92a0be3ebe786346e6c8c0ee30fe2468c0889cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 21 23:45:04 compute-0 elegant_solomon[248020]: 167 167
Jan 21 23:45:04 compute-0 systemd[1]: libpod-13d000c964c7bb5e1555f353a92a0be3ebe786346e6c8c0ee30fe2468c0889cc.scope: Deactivated successfully.
Jan 21 23:45:04 compute-0 podman[248003]: 2026-01-21 23:45:04.466362651 +0000 UTC m=+0.143847461 container died 13d000c964c7bb5e1555f353a92a0be3ebe786346e6c8c0ee30fe2468c0889cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_solomon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:45:04 compute-0 podman[248003]: 2026-01-21 23:45:04.502172251 +0000 UTC m=+0.179657061 container remove 13d000c964c7bb5e1555f353a92a0be3ebe786346e6c8c0ee30fe2468c0889cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 21 23:45:04 compute-0 systemd[1]: libpod-conmon-13d000c964c7bb5e1555f353a92a0be3ebe786346e6c8c0ee30fe2468c0889cc.scope: Deactivated successfully.
Jan 21 23:45:04 compute-0 podman[248044]: 2026-01-21 23:45:04.679758558 +0000 UTC m=+0.050519612 container create e2cbb50e73b6455e3bd1283b0c35184d8fe8eccb924736846f9cfd97035c3dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ishizaka, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:45:04 compute-0 systemd[1]: Started libpod-conmon-e2cbb50e73b6455e3bd1283b0c35184d8fe8eccb924736846f9cfd97035c3dec.scope.
Jan 21 23:45:04 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efd460bd6d0a5c3039ba4404dd5f97d909c6975225e1c330e60da53385260947/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efd460bd6d0a5c3039ba4404dd5f97d909c6975225e1c330e60da53385260947/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efd460bd6d0a5c3039ba4404dd5f97d909c6975225e1c330e60da53385260947/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efd460bd6d0a5c3039ba4404dd5f97d909c6975225e1c330e60da53385260947/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:45:04 compute-0 podman[248044]: 2026-01-21 23:45:04.663752177 +0000 UTC m=+0.034513261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:45:04 compute-0 podman[248044]: 2026-01-21 23:45:04.767469113 +0000 UTC m=+0.138230197 container init e2cbb50e73b6455e3bd1283b0c35184d8fe8eccb924736846f9cfd97035c3dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:45:04 compute-0 nova_compute[247516]: 2026-01-21 23:45:04.793 247523 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 21 23:45:04 compute-0 nova_compute[247516]: 2026-01-21 23:45:04.794 247523 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 21 23:45:04 compute-0 nova_compute[247516]: 2026-01-21 23:45:04.794 247523 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 21 23:45:04 compute-0 nova_compute[247516]: 2026-01-21 23:45:04.794 247523 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 21 23:45:04 compute-0 podman[248044]: 2026-01-21 23:45:04.818594011 +0000 UTC m=+0.189355095 container start e2cbb50e73b6455e3bd1283b0c35184d8fe8eccb924736846f9cfd97035c3dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ishizaka, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:45:04 compute-0 sshd-session[221465]: Connection closed by 192.168.122.30 port 39350
Jan 21 23:45:04 compute-0 podman[248044]: 2026-01-21 23:45:04.82270117 +0000 UTC m=+0.193462244 container attach e2cbb50e73b6455e3bd1283b0c35184d8fe8eccb924736846f9cfd97035c3dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ishizaka, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:45:04 compute-0 sshd-session[221462]: pam_unix(sshd:session): session closed for user zuul
Jan 21 23:45:04 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Jan 21 23:45:04 compute-0 systemd[1]: session-50.scope: Consumed 2min 17.211s CPU time.
Jan 21 23:45:04 compute-0 systemd-logind[786]: Session 50 logged out. Waiting for processes to exit.
Jan 21 23:45:04 compute-0 systemd-logind[786]: Removed session 50.
Jan 21 23:45:04 compute-0 nova_compute[247516]: 2026-01-21 23:45:04.952 247523 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:45:04 compute-0 nova_compute[247516]: 2026-01-21 23:45:04.981 247523 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:45:04 compute-0 nova_compute[247516]: 2026-01-21 23:45:04.982 247523 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 21 23:45:05 compute-0 ceph-mon[74318]: pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:05 compute-0 beautiful_ishizaka[248063]: {
Jan 21 23:45:05 compute-0 beautiful_ishizaka[248063]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:45:05 compute-0 beautiful_ishizaka[248063]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:45:05 compute-0 beautiful_ishizaka[248063]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:45:05 compute-0 beautiful_ishizaka[248063]:         "osd_id": 1,
Jan 21 23:45:05 compute-0 beautiful_ishizaka[248063]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:45:05 compute-0 beautiful_ishizaka[248063]:         "type": "bluestore"
Jan 21 23:45:05 compute-0 beautiful_ishizaka[248063]:     }
Jan 21 23:45:05 compute-0 beautiful_ishizaka[248063]: }
Jan 21 23:45:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:05.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:05 compute-0 systemd[1]: libpod-e2cbb50e73b6455e3bd1283b0c35184d8fe8eccb924736846f9cfd97035c3dec.scope: Deactivated successfully.
Jan 21 23:45:05 compute-0 podman[248044]: 2026-01-21 23:45:05.722889964 +0000 UTC m=+1.093651068 container died e2cbb50e73b6455e3bd1283b0c35184d8fe8eccb924736846f9cfd97035c3dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:45:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-efd460bd6d0a5c3039ba4404dd5f97d909c6975225e1c330e60da53385260947-merged.mount: Deactivated successfully.
Jan 21 23:45:05 compute-0 podman[248044]: 2026-01-21 23:45:05.791722659 +0000 UTC m=+1.162483753 container remove e2cbb50e73b6455e3bd1283b0c35184d8fe8eccb924736846f9cfd97035c3dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:45:05 compute-0 systemd[1]: libpod-conmon-e2cbb50e73b6455e3bd1283b0c35184d8fe8eccb924736846f9cfd97035c3dec.scope: Deactivated successfully.
Jan 21 23:45:05 compute-0 nova_compute[247516]: 2026-01-21 23:45:05.820 247523 INFO nova.virt.driver [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 21 23:45:05 compute-0 sudo[247895]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:45:05 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:45:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:45:05 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:45:05 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 81a224b0-1ba4-40bf-ac0f-5514979bd4ea does not exist
Jan 21 23:45:05 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev ef4953a0-fe72-445f-8b39-a2b06f3be1d5 does not exist
Jan 21 23:45:05 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev ef29cfee-7aa3-4e72-8351-c6eca130e527 does not exist
Jan 21 23:45:05 compute-0 sudo[248101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:45:05 compute-0 sudo[248101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:45:05 compute-0 sudo[248101]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:05 compute-0 nova_compute[247516]: 2026-01-21 23:45:05.963 247523 INFO nova.compute.provider_config [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 21 23:45:06 compute-0 sudo[248126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:45:06 compute-0 sudo[248126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:45:06 compute-0 sudo[248126]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:06.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:06 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:45:06 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:45:06 compute-0 ceph-mon[74318]: pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:45:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:45:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:07.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:45:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:45:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:08.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:45:09 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 23:45:09 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 6477 writes, 26K keys, 6477 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6477 writes, 1210 syncs, 5.35 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 454 writes, 678 keys, 454 commit groups, 1.0 writes per commit group, ingest: 0.22 MB, 0.00 MB/s
                                           Interval WAL: 454 writes, 220 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55889aee9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 21 23:45:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:45:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:45:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:45:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:45:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:45:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:45:09 compute-0 ceph-mon[74318]: pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:09.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:10.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:11 compute-0 ceph-mon[74318]: pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.370 247523 DEBUG oslo_concurrency.lockutils [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.371 247523 DEBUG oslo_concurrency.lockutils [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.372 247523 DEBUG oslo_concurrency.lockutils [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.372 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.373 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.373 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.373 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.373 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.374 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.374 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.374 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.374 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.375 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.375 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.375 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.375 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.376 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.376 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.376 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.376 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.377 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.377 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.377 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.377 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.377 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.378 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.378 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.378 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.378 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.379 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.379 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.379 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.379 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.379 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.380 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.380 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.380 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.381 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.381 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.381 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.381 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.382 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.382 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.382 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.383 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.383 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.383 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.383 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.384 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.384 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.384 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.384 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.385 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.385 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.385 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.385 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.386 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.386 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.386 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.387 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.387 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.387 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.387 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.388 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.388 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.388 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.388 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.389 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.389 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.389 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.389 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.389 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.390 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.390 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.390 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.391 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.391 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.391 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.391 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.391 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.392 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.392 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.392 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.392 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.393 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.393 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.393 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.393 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.394 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.394 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.394 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.394 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.394 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.395 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.395 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.395 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.395 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.395 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.396 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.396 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.396 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.397 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.397 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.397 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.397 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.398 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.398 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.398 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.398 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.398 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.399 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.399 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.399 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.400 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.400 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.400 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.400 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.400 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.401 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.401 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.401 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.401 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.402 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.402 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.402 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.403 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.403 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.403 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.403 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.404 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.404 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.404 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.404 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.405 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.405 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.405 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.405 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.406 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.406 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.406 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.406 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.407 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.407 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.407 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.407 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.407 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.408 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.408 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.408 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.408 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.409 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.409 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.409 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.410 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.410 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.410 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.411 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.411 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.411 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.411 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.412 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.412 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.412 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.412 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.412 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.413 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.413 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.413 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.413 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.414 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.414 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.414 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.414 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.414 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.415 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.415 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.415 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.415 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.415 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.416 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.416 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.416 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.417 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.417 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.417 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.417 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.417 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.418 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.418 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.418 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.418 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.419 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.419 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.419 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.419 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.420 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.420 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.420 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.420 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.421 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.421 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.421 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.421 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.421 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.422 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.422 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.422 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.423 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.423 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.423 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.423 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.424 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.424 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.424 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.424 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.425 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.425 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.425 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.425 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.426 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.426 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.426 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.426 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.427 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.427 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.427 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.427 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.427 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.428 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.428 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.428 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.429 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.429 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.429 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.429 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.430 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.430 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.430 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.431 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.431 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.431 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.431 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.432 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.432 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.432 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.433 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.433 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.433 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.433 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.433 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.434 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.434 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.434 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.434 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.434 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.434 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.435 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.435 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.435 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.435 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.435 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.436 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.436 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.436 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.436 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.436 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.436 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.437 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.437 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.437 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.437 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.437 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.437 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.437 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.438 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.438 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.438 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.438 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.438 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.438 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.438 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.439 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.439 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.439 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.439 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.439 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.439 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.439 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.440 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.440 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.440 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.440 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.440 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.440 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.441 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.441 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.441 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.441 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.441 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.441 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.442 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.442 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.442 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.442 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.442 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.442 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.443 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.443 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.443 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.443 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.443 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.443 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.443 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.444 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.444 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.444 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.444 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.444 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.444 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.444 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.445 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.445 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.445 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.445 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.445 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.445 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.446 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.446 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.446 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.446 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.446 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.446 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.446 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.447 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.447 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.447 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.447 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.447 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.447 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.448 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.448 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.448 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.448 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.448 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.448 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.449 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.449 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.449 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.449 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.449 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.449 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.450 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.450 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.450 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.450 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.450 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.451 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.451 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.451 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.452 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.452 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.452 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.452 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.452 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.452 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.453 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.453 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.453 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.453 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.453 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.453 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.453 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.454 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.454 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.454 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.454 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.454 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.454 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.455 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.455 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.455 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.455 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.455 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.455 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.456 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.456 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.456 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.456 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.456 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.456 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.457 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.457 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.457 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.457 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.457 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.458 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.458 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.458 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.458 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.458 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.458 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.459 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.459 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.459 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.459 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.459 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.459 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.460 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.460 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.461 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.462 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.462 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.463 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.463 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.464 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.464 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.464 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.465 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.465 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.465 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.466 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.466 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.466 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.467 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.467 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.467 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.468 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.468 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.468 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.469 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.469 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.469 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.469 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.470 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.470 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.470 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.471 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.471 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.471 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.472 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.472 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.472 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.472 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.473 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.473 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.473 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.474 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.474 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.474 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.474 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.475 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.475 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.475 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.476 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.476 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.477 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.477 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.477 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.478 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.478 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.478 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.479 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.479 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.479 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.480 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.480 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.481 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.481 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.481 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.482 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.482 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.483 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.483 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.483 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.484 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.484 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.484 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.485 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.485 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.485 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.486 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.486 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.486 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.487 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.487 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.487 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.487 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.488 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.488 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.488 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.489 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.489 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.489 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.490 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.490 247523 WARNING oslo_config.cfg [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 21 23:45:11 compute-0 nova_compute[247516]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 21 23:45:11 compute-0 nova_compute[247516]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 21 23:45:11 compute-0 nova_compute[247516]: and ``live_migration_inbound_addr`` respectively.
Jan 21 23:45:11 compute-0 nova_compute[247516]: ).  Its value may be silently ignored in the future.
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.491 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.491 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.491 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.492 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.492 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.492 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.493 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.493 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.493 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.493 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.494 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.494 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.494 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.495 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.495 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.496 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.496 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.497 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.497 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.rbd_secret_uuid        = 3759241a-7f1c-520d-ba17-879943ee2f00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.498 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.498 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.498 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.499 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.499 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.499 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.500 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.500 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.500 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.501 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.501 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.501 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.502 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.502 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.503 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.503 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.504 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.504 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.505 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.505 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.505 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.506 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.506 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.507 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.507 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.508 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.508 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.509 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.509 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.510 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.510 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.511 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.511 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.512 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.512 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.513 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.513 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.513 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.514 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.514 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.515 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.515 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.515 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.516 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.516 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.516 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.516 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.517 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.517 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.517 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.518 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.518 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.518 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.519 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.519 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.519 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.520 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.520 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.520 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.520 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.521 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.521 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.521 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.522 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.522 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.522 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.523 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.523 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.523 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.524 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.524 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.524 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.525 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.525 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.525 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.525 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.526 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.526 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.526 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.527 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.527 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.527 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.527 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.528 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.528 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.528 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.529 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.529 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.529 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.530 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.530 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.530 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.531 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.531 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.531 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.531 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.532 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.532 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.533 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.533 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.533 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.534 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.534 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.534 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.534 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.535 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.535 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.535 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.536 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.536 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.536 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.537 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.537 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.537 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.538 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.538 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.538 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.538 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.539 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.539 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.540 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.540 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.540 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.541 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.541 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.541 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.542 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.542 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.542 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.542 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.543 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.543 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.543 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.543 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.544 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.544 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.544 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.544 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.545 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.545 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.545 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.546 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.546 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.546 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.547 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.547 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.547 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.548 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.548 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.548 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.549 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.549 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.549 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.549 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.550 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.550 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.550 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.551 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.551 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.551 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.551 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.552 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.552 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.552 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.552 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.552 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.553 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.553 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.553 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.553 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.553 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.554 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.554 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.554 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.554 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.554 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.555 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.555 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.555 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.556 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.556 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.556 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.556 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.556 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.556 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.557 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.557 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.557 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.557 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.557 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.557 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.557 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.558 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.558 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.558 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.558 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.558 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.558 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.558 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.559 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.559 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.559 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.559 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.559 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.559 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.560 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.560 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.560 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.560 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.560 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.560 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.561 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.561 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.561 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.561 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.561 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.561 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.561 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.562 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.562 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.562 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.562 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.562 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.562 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.562 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.563 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.563 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.563 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.563 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.563 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.563 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.564 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.564 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.564 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.564 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.564 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.564 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.565 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.565 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.565 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.565 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.565 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.566 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.566 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.566 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.566 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.566 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.566 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.567 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.567 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.567 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.567 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.567 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.567 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.567 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.568 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.568 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.568 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.568 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.568 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.568 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.569 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.569 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.569 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.569 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.569 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.569 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.570 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.570 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.570 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.570 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.592 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.593 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.593 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.593 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.593 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.593 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.594 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.594 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.594 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.594 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.594 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.594 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.595 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.595 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.595 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.595 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.595 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.595 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.596 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.596 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.596 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.596 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.596 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.596 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.597 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.597 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.597 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.597 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.597 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.597 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.598 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.598 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.598 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.598 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.599 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.599 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.599 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.599 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.599 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.600 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.600 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.600 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.600 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.600 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.601 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.601 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.601 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.601 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.601 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.602 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.602 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.602 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.602 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.603 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.603 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.603 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.603 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.603 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.603 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.604 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.604 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.604 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.604 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.604 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.604 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.604 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.605 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.605 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.605 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.605 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.605 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.605 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.606 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.606 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.606 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.606 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.606 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.606 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.607 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.607 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.607 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.607 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.607 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.607 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.607 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.608 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.608 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.608 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.608 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.608 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.609 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.609 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.609 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.609 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.609 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.609 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.610 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.610 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.610 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.610 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.610 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.610 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.611 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.611 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.611 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.611 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.611 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.611 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.612 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.612 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.612 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.612 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.612 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.612 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.613 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.613 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.613 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.613 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.613 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.613 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.614 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.614 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.614 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.614 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.614 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.614 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.614 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.615 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.615 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.615 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.615 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.615 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.615 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.615 247523 DEBUG oslo_service.service [None req-c6e8991d-18ec-403b-9551-6a8c8caaf10b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.617 247523 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.633 247523 DEBUG nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.634 247523 DEBUG nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.634 247523 DEBUG nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.635 247523 DEBUG nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 21 23:45:11 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 21 23:45:11 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 21 23:45:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:11.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.729 247523 DEBUG nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fe132449b20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.734 247523 DEBUG nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fe132449b20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.734 247523 INFO nova.virt.libvirt.driver [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Connection event '1' reason 'None'
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.750 247523 WARNING nova.virt.libvirt.driver [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 21 23:45:11 compute-0 nova_compute[247516]: 2026-01-21 23:45:11.750 247523 DEBUG nova.virt.libvirt.volume.mount [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 21 23:45:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:12.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:12 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/538938736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:45:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:45:12 compute-0 nova_compute[247516]: 2026-01-21 23:45:12.770 247523 INFO nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Libvirt host capabilities <capabilities>
Jan 21 23:45:12 compute-0 nova_compute[247516]: 
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <host>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <uuid>31160826-6141-46dc-a546-fae3354f7966</uuid>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <cpu>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <arch>x86_64</arch>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model>EPYC-Rome-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <vendor>AMD</vendor>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <microcode version='16777317'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <signature family='23' model='49' stepping='0'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='x2apic'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='tsc-deadline'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='osxsave'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='hypervisor'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='tsc_adjust'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='spec-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='stibp'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='arch-capabilities'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='ssbd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='cmp_legacy'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='topoext'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='virt-ssbd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='lbrv'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='tsc-scale'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='vmcb-clean'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='pause-filter'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='pfthreshold'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='svme-addr-chk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='rdctl-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='skip-l1dfl-vmentry'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='mds-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature name='pschange-mc-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <pages unit='KiB' size='4'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <pages unit='KiB' size='2048'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <pages unit='KiB' size='1048576'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </cpu>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <power_management>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <suspend_mem/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </power_management>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <iommu support='no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <migration_features>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <live/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <uri_transports>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <uri_transport>tcp</uri_transport>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <uri_transport>rdma</uri_transport>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </uri_transports>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </migration_features>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <topology>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <cells num='1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <cell id='0'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:           <memory unit='KiB'>7864308</memory>
Jan 21 23:45:12 compute-0 nova_compute[247516]:           <pages unit='KiB' size='4'>1966077</pages>
Jan 21 23:45:12 compute-0 nova_compute[247516]:           <pages unit='KiB' size='2048'>0</pages>
Jan 21 23:45:12 compute-0 nova_compute[247516]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 21 23:45:12 compute-0 nova_compute[247516]:           <distances>
Jan 21 23:45:12 compute-0 nova_compute[247516]:             <sibling id='0' value='10'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:           </distances>
Jan 21 23:45:12 compute-0 nova_compute[247516]:           <cpus num='8'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:           </cpus>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         </cell>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </cells>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </topology>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <cache>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </cache>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <secmodel>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model>selinux</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <doi>0</doi>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </secmodel>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <secmodel>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model>dac</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <doi>0</doi>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </secmodel>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   </host>
Jan 21 23:45:12 compute-0 nova_compute[247516]: 
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <guest>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <os_type>hvm</os_type>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <arch name='i686'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <wordsize>32</wordsize>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <domain type='qemu'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <domain type='kvm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </arch>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <features>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <pae/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <nonpae/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <acpi default='on' toggle='yes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <apic default='on' toggle='no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <cpuselection/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <deviceboot/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <disksnapshot default='on' toggle='no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <externalSnapshot/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </features>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   </guest>
Jan 21 23:45:12 compute-0 nova_compute[247516]: 
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <guest>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <os_type>hvm</os_type>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <arch name='x86_64'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <wordsize>64</wordsize>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <domain type='qemu'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <domain type='kvm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </arch>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <features>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <acpi default='on' toggle='yes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <apic default='on' toggle='no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <cpuselection/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <deviceboot/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <disksnapshot default='on' toggle='no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <externalSnapshot/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </features>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   </guest>
Jan 21 23:45:12 compute-0 nova_compute[247516]: 
Jan 21 23:45:12 compute-0 nova_compute[247516]: </capabilities>
Jan 21 23:45:12 compute-0 nova_compute[247516]: 
Jan 21 23:45:12 compute-0 nova_compute[247516]: 2026-01-21 23:45:12.777 247523 DEBUG nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 21 23:45:12 compute-0 nova_compute[247516]: 2026-01-21 23:45:12.810 247523 DEBUG nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 21 23:45:12 compute-0 nova_compute[247516]: <domainCapabilities>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <path>/usr/libexec/qemu-kvm</path>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <domain>kvm</domain>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <arch>i686</arch>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <vcpu max='240'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <iothreads supported='yes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <os supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <enum name='firmware'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <loader supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>rom</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>pflash</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='readonly'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>yes</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>no</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='secure'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>no</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </loader>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   </os>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <cpu>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <mode name='host-passthrough' supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='hostPassthroughMigratable'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>on</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>off</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </mode>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <mode name='maximum' supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='maximumMigratable'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>on</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>off</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </mode>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <mode name='host-model' supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <vendor>AMD</vendor>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='x2apic'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='tsc-deadline'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='hypervisor'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='tsc_adjust'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='spec-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='stibp'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='ssbd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='cmp_legacy'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='overflow-recov'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='succor'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='ibrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='amd-ssbd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='virt-ssbd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='lbrv'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='tsc-scale'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='vmcb-clean'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='flushbyasid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='pause-filter'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='pfthreshold'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='svme-addr-chk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='disable' name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </mode>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <mode name='custom' supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Broadwell'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Broadwell-IBRS'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Broadwell-noTSX'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Broadwell-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Broadwell-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Broadwell-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Broadwell-v4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v5'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='ClearwaterForest'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bhi-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bhi-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ddpd-u'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='intel-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ipred-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='lam'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rrsba-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sha512'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sm3'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sm4'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='ClearwaterForest-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bhi-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bhi-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ddpd-u'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='intel-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ipred-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='lam'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rrsba-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sha512'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sm3'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sm4'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cooperlake'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cooperlake-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cooperlake-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Denverton'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mpx'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Denverton-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mpx'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Denverton-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Denverton-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Dhyana-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Genoa'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Genoa-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Genoa-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fs-gs-base-ns'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='perfmon-v2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Milan'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Milan-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Milan-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Milan-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Rome'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Rome-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Rome-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Rome-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Turin'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vp2intersect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fs-gs-base-ns'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibpb-brtype'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='perfmon-v2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='prefetchi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbpb'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='srso-user-kernel-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Turin-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vp2intersect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fs-gs-base-ns'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibpb-brtype'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='perfmon-v2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='prefetchi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbpb'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='srso-user-kernel-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-v4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-v5'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='GraniteRapids'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='GraniteRapids-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='GraniteRapids-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx10'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx10-128'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx10-256'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx10-512'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='GraniteRapids-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx10'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx10-128'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx10-256'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx10-512'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Haswell'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Haswell-IBRS'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Haswell-noTSX'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Haswell-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Haswell-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Haswell-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Haswell-v4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-noTSX'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v5'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v6'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v7'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='IvyBridge'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='IvyBridge-IBRS'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='IvyBridge-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='IvyBridge-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='KnightsMill'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-4fmaps'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-4vnniw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512er'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512pf'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='KnightsMill-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-4fmaps'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-4vnniw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512er'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512pf'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Opteron_G4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fma4'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xop'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Opteron_G4-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fma4'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xop'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Opteron_G5'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fma4'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tbm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xop'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Opteron_G5-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fma4'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tbm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xop'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids-v4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='SierraForest'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='SierraForest-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='SierraForest-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bhi-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='intel-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ipred-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='lam'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rrsba-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='SierraForest-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bhi-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='intel-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ipred-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='lam'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rrsba-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-IBRS'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-v4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-IBRS'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v5'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Snowridge'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='core-capability'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mpx'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='split-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Snowridge-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='core-capability'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mpx'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='split-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Snowridge-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='core-capability'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='split-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Snowridge-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='core-capability'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='split-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Snowridge-v4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='athlon'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='3dnow'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='3dnowext'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='athlon-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='3dnow'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='3dnowext'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='core2duo'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='core2duo-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='coreduo'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='coreduo-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='n270'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='n270-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='phenom'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='3dnow'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='3dnowext'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='phenom-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='3dnow'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='3dnowext'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </mode>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   </cpu>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <memoryBacking supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <enum name='sourceType'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <value>file</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <value>anonymous</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <value>memfd</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   </memoryBacking>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <devices>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <disk supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='diskDevice'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>disk</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>cdrom</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>floppy</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>lun</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='bus'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>ide</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>fdc</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>scsi</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>virtio</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>usb</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>sata</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='model'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>virtio</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>virtio-transitional</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>virtio-non-transitional</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </disk>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <graphics supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>vnc</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>egl-headless</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>dbus</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </graphics>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <video supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='modelType'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>vga</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>cirrus</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>virtio</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>none</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>bochs</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>ramfb</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </video>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <hostdev supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='mode'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>subsystem</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='startupPolicy'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>default</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>mandatory</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>requisite</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>optional</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='subsysType'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>usb</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>pci</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>scsi</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='capsType'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='pciBackend'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </hostdev>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <rng supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='model'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>virtio</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>virtio-transitional</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>virtio-non-transitional</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='backendModel'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>random</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>egd</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>builtin</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </rng>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <filesystem supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='driverType'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>path</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>handle</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>virtiofs</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </filesystem>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <tpm supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='model'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>tpm-tis</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>tpm-crb</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='backendModel'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>emulator</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>external</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='backendVersion'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>2.0</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </tpm>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <redirdev supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='bus'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>usb</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </redirdev>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <channel supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>pty</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>unix</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </channel>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <crypto supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='model'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>qemu</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='backendModel'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>builtin</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </crypto>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <interface supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='backendType'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>default</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>passt</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </interface>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <panic supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='model'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>isa</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>hyperv</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </panic>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <console supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>null</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>vc</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>pty</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>dev</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>file</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>pipe</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>stdio</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>udp</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>tcp</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>unix</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>qemu-vdagent</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>dbus</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </console>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   </devices>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <features>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <gic supported='no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <vmcoreinfo supported='yes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <genid supported='yes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <backingStoreInput supported='yes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <backup supported='yes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <async-teardown supported='yes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <s390-pv supported='no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <ps2 supported='yes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <tdx supported='no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <sev supported='no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <sgx supported='no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <hyperv supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='features'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>relaxed</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>vapic</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>spinlocks</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>vpindex</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>runtime</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>synic</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>stimer</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>reset</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>vendor_id</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>frequencies</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>reenlightenment</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>tlbflush</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>ipi</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>avic</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>emsr_bitmap</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>xmm_input</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <defaults>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <spinlocks>4095</spinlocks>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <stimer_direct>on</stimer_direct>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <tlbflush_direct>on</tlbflush_direct>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <tlbflush_extended>on</tlbflush_extended>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </defaults>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </hyperv>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <launchSecurity supported='no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   </features>
Jan 21 23:45:12 compute-0 nova_compute[247516]: </domainCapabilities>
Jan 21 23:45:12 compute-0 nova_compute[247516]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 21 23:45:12 compute-0 nova_compute[247516]: 2026-01-21 23:45:12.832 247523 DEBUG nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 21 23:45:12 compute-0 nova_compute[247516]: <domainCapabilities>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <path>/usr/libexec/qemu-kvm</path>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <domain>kvm</domain>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <arch>i686</arch>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <vcpu max='4096'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <iothreads supported='yes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <os supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <enum name='firmware'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <loader supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>rom</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>pflash</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='readonly'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>yes</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>no</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='secure'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>no</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </loader>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   </os>
Jan 21 23:45:12 compute-0 nova_compute[247516]:   <cpu>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <mode name='host-passthrough' supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='hostPassthroughMigratable'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>on</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>off</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </mode>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <mode name='maximum' supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <enum name='maximumMigratable'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>on</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <value>off</value>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </mode>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <mode name='host-model' supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <vendor>AMD</vendor>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='x2apic'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='tsc-deadline'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='hypervisor'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='tsc_adjust'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='spec-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='stibp'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='ssbd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='cmp_legacy'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='overflow-recov'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='succor'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='ibrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='amd-ssbd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='virt-ssbd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='lbrv'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='tsc-scale'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='vmcb-clean'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='flushbyasid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='pause-filter'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='pfthreshold'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='svme-addr-chk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <feature policy='disable' name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     </mode>
Jan 21 23:45:12 compute-0 nova_compute[247516]:     <mode name='custom' supported='yes'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Broadwell'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Broadwell-IBRS'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Broadwell-noTSX'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Broadwell-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Broadwell-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Broadwell-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Broadwell-v4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v5'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='ClearwaterForest'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bhi-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bhi-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ddpd-u'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='intel-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ipred-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='lam'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rrsba-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sha512'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sm3'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sm4'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='ClearwaterForest-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bhi-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bhi-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ddpd-u'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='intel-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ipred-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='lam'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rrsba-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sha512'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sm3'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sm4'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cooperlake'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cooperlake-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Cooperlake-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Denverton'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mpx'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Denverton-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mpx'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Denverton-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Denverton-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Dhyana-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Genoa'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Genoa-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Genoa-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fs-gs-base-ns'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='perfmon-v2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Milan'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Milan-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Milan-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Milan-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Rome'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Rome-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Rome-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Rome-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Turin'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vp2intersect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fs-gs-base-ns'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibpb-brtype'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='perfmon-v2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='prefetchi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbpb'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='srso-user-kernel-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-Turin-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vp2intersect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fs-gs-base-ns'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibpb-brtype'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='perfmon-v2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='prefetchi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbpb'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='srso-user-kernel-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-v4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='EPYC-v5'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='GraniteRapids'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='GraniteRapids-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='GraniteRapids-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx10'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx10-128'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx10-256'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx10-512'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='GraniteRapids-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx10'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx10-128'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx10-256'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx10-512'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Haswell'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Haswell-IBRS'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Haswell-noTSX'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Haswell-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Haswell-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Haswell-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Haswell-v4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-noTSX'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v5'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v6'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v7'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='IvyBridge'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='IvyBridge-IBRS'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='IvyBridge-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='IvyBridge-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='KnightsMill'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-4fmaps'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-4vnniw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512er'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512pf'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='KnightsMill-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-4fmaps'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-4vnniw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512er'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512pf'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Opteron_G4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fma4'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xop'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Opteron_G4-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fma4'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xop'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Opteron_G5'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fma4'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tbm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xop'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Opteron_G5-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fma4'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tbm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xop'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids-v4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='SierraForest'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='SierraForest-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='SierraForest-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bhi-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='intel-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ipred-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='lam'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rrsba-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='SierraForest-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bhi-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='intel-psfd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ipred-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='lam'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rrsba-ctrl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-IBRS'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-v4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-IBRS'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v1'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v2'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v3'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v4'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v5'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 23:45:12 compute-0 nova_compute[247516]:       <blockers model='Snowridge'>
Jan 21 23:45:12 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='core-capability'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mpx'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='split-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Snowridge-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='core-capability'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mpx'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='split-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Snowridge-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='core-capability'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='split-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Snowridge-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='core-capability'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='split-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Snowridge-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='athlon'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnow'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnowext'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='athlon-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnow'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnowext'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='core2duo'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='core2duo-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='coreduo'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='coreduo-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='n270'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='n270-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='phenom'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnow'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnowext'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='phenom-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnow'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnowext'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </mode>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   </cpu>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <memoryBacking supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <enum name='sourceType'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <value>file</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <value>anonymous</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <value>memfd</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   </memoryBacking>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <devices>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <disk supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='diskDevice'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>disk</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>cdrom</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>floppy</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>lun</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='bus'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>fdc</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>scsi</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>usb</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>sata</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='model'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio-transitional</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio-non-transitional</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </disk>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <graphics supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>vnc</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>egl-headless</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>dbus</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </graphics>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <video supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='modelType'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>vga</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>cirrus</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>none</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>bochs</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>ramfb</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </video>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <hostdev supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='mode'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>subsystem</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='startupPolicy'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>default</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>mandatory</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>requisite</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>optional</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='subsysType'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>usb</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>pci</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>scsi</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='capsType'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='pciBackend'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </hostdev>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <rng supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='model'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio-transitional</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio-non-transitional</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='backendModel'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>random</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>egd</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>builtin</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </rng>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <filesystem supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='driverType'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>path</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>handle</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtiofs</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </filesystem>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <tpm supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='model'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>tpm-tis</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>tpm-crb</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='backendModel'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>emulator</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>external</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='backendVersion'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>2.0</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </tpm>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <redirdev supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='bus'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>usb</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </redirdev>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <channel supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>pty</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>unix</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </channel>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <crypto supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='model'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>qemu</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='backendModel'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>builtin</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </crypto>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <interface supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='backendType'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>default</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>passt</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </interface>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <panic supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='model'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>isa</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>hyperv</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </panic>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <console supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>null</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>vc</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>pty</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>dev</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>file</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>pipe</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>stdio</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>udp</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>tcp</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>unix</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>qemu-vdagent</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>dbus</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </console>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   </devices>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <features>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <gic supported='no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <vmcoreinfo supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <genid supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <backingStoreInput supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <backup supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <async-teardown supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <s390-pv supported='no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <ps2 supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <tdx supported='no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <sev supported='no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <sgx supported='no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <hyperv supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='features'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>relaxed</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>vapic</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>spinlocks</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>vpindex</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>runtime</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>synic</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>stimer</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>reset</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>vendor_id</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>frequencies</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>reenlightenment</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>tlbflush</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>ipi</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>avic</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>emsr_bitmap</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>xmm_input</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <defaults>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <spinlocks>4095</spinlocks>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <stimer_direct>on</stimer_direct>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <tlbflush_direct>on</tlbflush_direct>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <tlbflush_extended>on</tlbflush_extended>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </defaults>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </hyperv>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <launchSecurity supported='no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   </features>
Jan 21 23:45:13 compute-0 nova_compute[247516]: </domainCapabilities>
Jan 21 23:45:13 compute-0 nova_compute[247516]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:12.925 247523 DEBUG nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:12.932 247523 DEBUG nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 21 23:45:13 compute-0 nova_compute[247516]: <domainCapabilities>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <path>/usr/libexec/qemu-kvm</path>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <domain>kvm</domain>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <arch>x86_64</arch>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <vcpu max='240'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <iothreads supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <os supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <enum name='firmware'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <loader supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>rom</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>pflash</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='readonly'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>yes</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>no</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='secure'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>no</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </loader>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   </os>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <cpu>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <mode name='host-passthrough' supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='hostPassthroughMigratable'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>on</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>off</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </mode>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <mode name='maximum' supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='maximumMigratable'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>on</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>off</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </mode>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <mode name='host-model' supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <vendor>AMD</vendor>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='x2apic'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='tsc-deadline'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='hypervisor'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='tsc_adjust'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='spec-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='stibp'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='ssbd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='cmp_legacy'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='overflow-recov'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='succor'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='ibrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='amd-ssbd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='virt-ssbd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='lbrv'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='tsc-scale'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='vmcb-clean'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='flushbyasid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='pause-filter'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='pfthreshold'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='svme-addr-chk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='disable' name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </mode>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <mode name='custom' supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Broadwell'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Broadwell-IBRS'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Broadwell-noTSX'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Broadwell-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Broadwell-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Broadwell-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Broadwell-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v5'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='ClearwaterForest'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bhi-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bhi-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ddpd-u'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='intel-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ipred-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='lam'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rrsba-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sha512'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sm3'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sm4'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='ClearwaterForest-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bhi-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bhi-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ddpd-u'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='intel-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ipred-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='lam'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rrsba-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sha512'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sm3'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sm4'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cooperlake'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cooperlake-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cooperlake-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Denverton'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mpx'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Denverton-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mpx'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Denverton-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Denverton-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Dhyana-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Genoa'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Genoa-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Genoa-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fs-gs-base-ns'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='perfmon-v2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Milan'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Milan-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Milan-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Milan-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Rome'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Rome-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Rome-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Rome-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Turin'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vp2intersect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fs-gs-base-ns'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibpb-brtype'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='perfmon-v2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='prefetchi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbpb'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='srso-user-kernel-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Turin-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vp2intersect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fs-gs-base-ns'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibpb-brtype'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='perfmon-v2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='prefetchi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbpb'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='srso-user-kernel-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-v5'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='GraniteRapids'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='GraniteRapids-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='GraniteRapids-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx10'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx10-128'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx10-256'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx10-512'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='GraniteRapids-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx10'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx10-128'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx10-256'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx10-512'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Haswell'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Haswell-IBRS'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Haswell-noTSX'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Haswell-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Haswell-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Haswell-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Haswell-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-noTSX'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v5'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v6'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v7'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='IvyBridge'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='IvyBridge-IBRS'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='IvyBridge-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='IvyBridge-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='KnightsMill'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-4fmaps'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-4vnniw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512er'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512pf'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='KnightsMill-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-4fmaps'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-4vnniw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512er'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512pf'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Opteron_G4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fma4'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xop'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Opteron_G4-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fma4'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xop'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Opteron_G5'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fma4'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tbm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xop'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Opteron_G5-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fma4'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tbm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xop'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='SierraForest'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='SierraForest-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='SierraForest-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bhi-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='intel-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ipred-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='lam'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rrsba-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='SierraForest-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bhi-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='intel-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ipred-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='lam'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rrsba-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-IBRS'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-IBRS'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v5'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Snowridge'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='core-capability'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mpx'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='split-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Snowridge-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='core-capability'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mpx'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='split-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Snowridge-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='core-capability'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='split-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Snowridge-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='core-capability'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='split-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Snowridge-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='athlon'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnow'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnowext'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='athlon-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnow'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnowext'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='core2duo'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='core2duo-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='coreduo'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='coreduo-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='n270'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='n270-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='phenom'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnow'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnowext'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='phenom-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnow'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnowext'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </mode>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   </cpu>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <memoryBacking supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <enum name='sourceType'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <value>file</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <value>anonymous</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <value>memfd</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   </memoryBacking>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <devices>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <disk supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='diskDevice'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>disk</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>cdrom</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>floppy</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>lun</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='bus'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>ide</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>fdc</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>scsi</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>usb</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>sata</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='model'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio-transitional</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio-non-transitional</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </disk>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <graphics supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>vnc</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>egl-headless</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>dbus</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </graphics>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <video supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='modelType'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>vga</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>cirrus</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>none</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>bochs</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>ramfb</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </video>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <hostdev supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='mode'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>subsystem</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='startupPolicy'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>default</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>mandatory</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>requisite</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>optional</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='subsysType'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>usb</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>pci</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>scsi</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='capsType'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='pciBackend'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </hostdev>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <rng supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='model'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio-transitional</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio-non-transitional</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='backendModel'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>random</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>egd</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>builtin</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </rng>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <filesystem supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='driverType'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>path</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>handle</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtiofs</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </filesystem>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <tpm supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='model'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>tpm-tis</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>tpm-crb</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='backendModel'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>emulator</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>external</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='backendVersion'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>2.0</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </tpm>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <redirdev supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='bus'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>usb</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </redirdev>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <channel supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>pty</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>unix</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </channel>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <crypto supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='model'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>qemu</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='backendModel'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>builtin</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </crypto>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <interface supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='backendType'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>default</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>passt</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </interface>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <panic supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='model'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>isa</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>hyperv</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </panic>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <console supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>null</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>vc</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>pty</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>dev</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>file</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>pipe</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>stdio</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>udp</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>tcp</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>unix</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>qemu-vdagent</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>dbus</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </console>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   </devices>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <features>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <gic supported='no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <vmcoreinfo supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <genid supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <backingStoreInput supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <backup supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <async-teardown supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <s390-pv supported='no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <ps2 supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <tdx supported='no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <sev supported='no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <sgx supported='no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <hyperv supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='features'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>relaxed</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>vapic</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>spinlocks</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>vpindex</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>runtime</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>synic</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>stimer</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>reset</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>vendor_id</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>frequencies</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>reenlightenment</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>tlbflush</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>ipi</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>avic</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>emsr_bitmap</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>xmm_input</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <defaults>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <spinlocks>4095</spinlocks>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <stimer_direct>on</stimer_direct>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <tlbflush_direct>on</tlbflush_direct>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <tlbflush_extended>on</tlbflush_extended>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </defaults>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </hyperv>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <launchSecurity supported='no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   </features>
Jan 21 23:45:13 compute-0 nova_compute[247516]: </domainCapabilities>
Jan 21 23:45:13 compute-0 nova_compute[247516]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.030 247523 DEBUG nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 21 23:45:13 compute-0 nova_compute[247516]: <domainCapabilities>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <path>/usr/libexec/qemu-kvm</path>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <domain>kvm</domain>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <arch>x86_64</arch>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <vcpu max='4096'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <iothreads supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <os supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <enum name='firmware'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <value>efi</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <loader supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>rom</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>pflash</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='readonly'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>yes</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>no</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='secure'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>yes</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>no</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </loader>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   </os>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <cpu>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <mode name='host-passthrough' supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='hostPassthroughMigratable'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>on</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>off</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </mode>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <mode name='maximum' supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='maximumMigratable'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>on</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>off</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </mode>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <mode name='host-model' supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <vendor>AMD</vendor>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='x2apic'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='tsc-deadline'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='hypervisor'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='tsc_adjust'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='spec-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='stibp'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='ssbd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='cmp_legacy'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='overflow-recov'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='succor'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='ibrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='amd-ssbd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='virt-ssbd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='lbrv'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='tsc-scale'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='vmcb-clean'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='flushbyasid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='pause-filter'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='pfthreshold'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='svme-addr-chk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <feature policy='disable' name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </mode>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <mode name='custom' supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Broadwell'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Broadwell-IBRS'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Broadwell-noTSX'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Broadwell-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Broadwell-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Broadwell-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Broadwell-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cascadelake-Server-v5'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='ClearwaterForest'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bhi-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bhi-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ddpd-u'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='intel-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ipred-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='lam'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rrsba-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sha512'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sm3'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sm4'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='ClearwaterForest-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bhi-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bhi-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ddpd-u'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='intel-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ipred-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='lam'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rrsba-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sha512'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sm3'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sm4'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cooperlake'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cooperlake-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Cooperlake-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Denverton'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mpx'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Denverton-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mpx'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Denverton-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Denverton-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Dhyana-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Genoa'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Genoa-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Genoa-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fs-gs-base-ns'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='perfmon-v2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Milan'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Milan-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Milan-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Milan-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Rome'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Rome-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Rome-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Rome-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Turin'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vp2intersect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fs-gs-base-ns'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibpb-brtype'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='perfmon-v2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='prefetchi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbpb'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='srso-user-kernel-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-Turin-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amd-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='auto-ibrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vp2intersect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fs-gs-base-ns'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibpb-brtype'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='no-nested-data-bp'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='null-sel-clr-base'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='perfmon-v2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='prefetchi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbpb'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='srso-user-kernel-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='stibp-always-on'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='EPYC-v5'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='GraniteRapids'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='GraniteRapids-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='GraniteRapids-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx10'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx10-128'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx10-256'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx10-512'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='GraniteRapids-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx10'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx10-128'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx10-256'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx10-512'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='prefetchiti'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Haswell'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Haswell-IBRS'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Haswell-noTSX'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Haswell-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Haswell-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Haswell-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Haswell-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-noTSX'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v5'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v6'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Icelake-Server-v7'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='IvyBridge'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='IvyBridge-IBRS'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='IvyBridge-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='IvyBridge-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='KnightsMill'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-4fmaps'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-4vnniw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512er'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512pf'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='KnightsMill-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-4fmaps'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-4vnniw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512er'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512pf'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Opteron_G4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fma4'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xop'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Opteron_G4-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fma4'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xop'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Opteron_G5'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fma4'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tbm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xop'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Opteron_G5-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fma4'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tbm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xop'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='SapphireRapids-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='amx-tile'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-bf16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-fp16'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512-vpopcntdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bitalg'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vbmi2'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrc'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fzrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='la57'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='taa-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='tsx-ldtrk'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='SierraForest'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='SierraForest-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='SierraForest-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bhi-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='intel-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ipred-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='lam'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rrsba-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='SierraForest-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ifma'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-ne-convert'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx-vnni-int8'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bhi-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='bus-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cmpccxadd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fbsdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='fsrs'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ibrs-all'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='intel-psfd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ipred-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='lam'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mcdt-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pbrsb-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='psdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rrsba-ctrl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='sbdr-ssdp-no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='serialize'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vaes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='vpclmulqdq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-IBRS'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Client-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-IBRS'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='hle'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='rtm'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Skylake-Server-v5'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512bw'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512cd'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512dq'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512f'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='avx512vl'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='invpcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pcid'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='pku'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Snowridge'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='core-capability'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mpx'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='split-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Snowridge-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='core-capability'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='mpx'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='split-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Snowridge-v2'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='core-capability'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='split-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Snowridge-v3'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='core-capability'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='split-lock-detect'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='Snowridge-v4'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='cldemote'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='erms'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='gfni'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdir64b'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='movdiri'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='xsaves'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='athlon'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnow'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnowext'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='athlon-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnow'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnowext'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='core2duo'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='core2duo-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='coreduo'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='coreduo-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='n270'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='n270-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='ss'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='phenom'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnow'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnowext'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <blockers model='phenom-v1'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnow'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <feature name='3dnowext'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </blockers>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </mode>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   </cpu>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <memoryBacking supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <enum name='sourceType'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <value>file</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <value>anonymous</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <value>memfd</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   </memoryBacking>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <devices>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <disk supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='diskDevice'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>disk</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>cdrom</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>floppy</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>lun</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='bus'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>fdc</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>scsi</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>usb</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>sata</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='model'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio-transitional</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio-non-transitional</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </disk>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <graphics supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>vnc</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>egl-headless</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>dbus</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </graphics>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <video supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='modelType'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>vga</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>cirrus</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>none</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>bochs</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>ramfb</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </video>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <hostdev supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='mode'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>subsystem</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='startupPolicy'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>default</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>mandatory</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>requisite</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>optional</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='subsysType'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>usb</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>pci</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>scsi</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='capsType'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='pciBackend'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </hostdev>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <rng supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='model'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio-transitional</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtio-non-transitional</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='backendModel'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>random</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>egd</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>builtin</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </rng>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <filesystem supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='driverType'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>path</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>handle</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>virtiofs</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </filesystem>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <tpm supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='model'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>tpm-tis</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>tpm-crb</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='backendModel'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>emulator</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>external</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='backendVersion'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>2.0</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </tpm>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <redirdev supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='bus'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>usb</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </redirdev>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <channel supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>pty</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>unix</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </channel>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <crypto supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='model'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>qemu</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='backendModel'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>builtin</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </crypto>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <interface supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='backendType'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>default</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>passt</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </interface>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <panic supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='model'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>isa</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>hyperv</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </panic>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <console supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='type'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>null</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>vc</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>pty</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>dev</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>file</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>pipe</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>stdio</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>udp</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>tcp</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>unix</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>qemu-vdagent</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>dbus</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </console>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   </devices>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <features>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <gic supported='no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <vmcoreinfo supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <genid supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <backingStoreInput supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <backup supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <async-teardown supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <s390-pv supported='no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <ps2 supported='yes'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <tdx supported='no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <sev supported='no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <sgx supported='no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <hyperv supported='yes'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <enum name='features'>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>relaxed</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>vapic</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>spinlocks</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>vpindex</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>runtime</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>synic</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>stimer</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>reset</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>vendor_id</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>frequencies</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>reenlightenment</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>tlbflush</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>ipi</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>avic</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>emsr_bitmap</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <value>xmm_input</value>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </enum>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       <defaults>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <spinlocks>4095</spinlocks>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <stimer_direct>on</stimer_direct>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <tlbflush_direct>on</tlbflush_direct>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <tlbflush_extended>on</tlbflush_extended>
Jan 21 23:45:13 compute-0 nova_compute[247516]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 23:45:13 compute-0 nova_compute[247516]:       </defaults>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     </hyperv>
Jan 21 23:45:13 compute-0 nova_compute[247516]:     <launchSecurity supported='no'/>
Jan 21 23:45:13 compute-0 nova_compute[247516]:   </features>
Jan 21 23:45:13 compute-0 nova_compute[247516]: </domainCapabilities>
Jan 21 23:45:13 compute-0 nova_compute[247516]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.134 247523 DEBUG nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.135 247523 DEBUG nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.135 247523 DEBUG nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.141 247523 INFO nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Secure Boot support detected
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.144 247523 INFO nova.virt.libvirt.driver [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.144 247523 INFO nova.virt.libvirt.driver [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.161 247523 DEBUG nova.virt.libvirt.driver [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] cpu compare xml: <cpu match="exact">
Jan 21 23:45:13 compute-0 nova_compute[247516]:   <model>Nehalem</model>
Jan 21 23:45:13 compute-0 nova_compute[247516]: </cpu>
Jan 21 23:45:13 compute-0 nova_compute[247516]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.165 247523 DEBUG nova.virt.libvirt.driver [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.185 247523 INFO nova.virt.node [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Determined node identity c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 from /var/lib/nova/compute_id
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.199 247523 WARNING nova.compute.manager [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Compute nodes ['c0ebcd59-c8be-41e3-9c46-a4b74f020ea8'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.229 247523 INFO nova.compute.manager [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.273 247523 WARNING nova.compute.manager [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.273 247523 DEBUG oslo_concurrency.lockutils [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.273 247523 DEBUG oslo_concurrency.lockutils [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.274 247523 DEBUG oslo_concurrency.lockutils [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.274 247523 DEBUG nova.compute.resource_tracker [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.274 247523 DEBUG oslo_concurrency.processutils [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:45:13 compute-0 ceph-mon[74318]: pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:13.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:45:13 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/177562243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:45:13 compute-0 nova_compute[247516]: 2026-01-21 23:45:13.759 247523 DEBUG oslo_concurrency.processutils [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:45:13 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 21 23:45:13 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 21 23:45:14 compute-0 nova_compute[247516]: 2026-01-21 23:45:14.137 247523 WARNING nova.virt.libvirt.driver [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 23:45:14 compute-0 nova_compute[247516]: 2026-01-21 23:45:14.139 247523 DEBUG nova.compute.resource_tracker [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5150MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 23:45:14 compute-0 nova_compute[247516]: 2026-01-21 23:45:14.139 247523 DEBUG oslo_concurrency.lockutils [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:45:14 compute-0 nova_compute[247516]: 2026-01-21 23:45:14.139 247523 DEBUG oslo_concurrency.lockutils [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:45:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:45:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:14.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:45:14 compute-0 nova_compute[247516]: 2026-01-21 23:45:14.308 247523 WARNING nova.compute.resource_tracker [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] No compute node record for compute-0.ctlplane.example.com:c0ebcd59-c8be-41e3-9c46-a4b74f020ea8: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 could not be found.
Jan 21 23:45:14 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1683397309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:45:14 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/177562243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:45:14 compute-0 nova_compute[247516]: 2026-01-21 23:45:14.358 247523 INFO nova.compute.resource_tracker [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8
Jan 21 23:45:14 compute-0 nova_compute[247516]: 2026-01-21 23:45:14.458 247523 DEBUG nova.compute.resource_tracker [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 23:45:14 compute-0 nova_compute[247516]: 2026-01-21 23:45:14.459 247523 DEBUG nova.compute.resource_tracker [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 23:45:14 compute-0 nova_compute[247516]: 2026-01-21 23:45:14.721 247523 INFO nova.scheduler.client.report [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] [req-deb88fcc-4df8-4b26-94c0-3f0260297edb] Created resource provider record via placement API for resource provider with UUID c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 and name compute-0.ctlplane.example.com.
Jan 21 23:45:14 compute-0 nova_compute[247516]: 2026-01-21 23:45:14.742 247523 DEBUG oslo_concurrency.processutils [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:45:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:45:15 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1266600637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:45:15 compute-0 nova_compute[247516]: 2026-01-21 23:45:15.193 247523 DEBUG oslo_concurrency.processutils [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:45:15 compute-0 nova_compute[247516]: 2026-01-21 23:45:15.201 247523 DEBUG nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 21 23:45:15 compute-0 nova_compute[247516]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 21 23:45:15 compute-0 nova_compute[247516]: 2026-01-21 23:45:15.201 247523 INFO nova.virt.libvirt.host [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] kernel doesn't support AMD SEV
Jan 21 23:45:15 compute-0 nova_compute[247516]: 2026-01-21 23:45:15.202 247523 DEBUG nova.compute.provider_tree [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Updating inventory in ProviderTree for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 21 23:45:15 compute-0 nova_compute[247516]: 2026-01-21 23:45:15.203 247523 DEBUG nova.virt.libvirt.driver [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 21 23:45:15 compute-0 nova_compute[247516]: 2026-01-21 23:45:15.206 247523 DEBUG nova.virt.libvirt.driver [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Libvirt baseline CPU <cpu>
Jan 21 23:45:15 compute-0 nova_compute[247516]:   <arch>x86_64</arch>
Jan 21 23:45:15 compute-0 nova_compute[247516]:   <model>Nehalem</model>
Jan 21 23:45:15 compute-0 nova_compute[247516]:   <vendor>AMD</vendor>
Jan 21 23:45:15 compute-0 nova_compute[247516]:   <topology sockets="8" cores="1" threads="1"/>
Jan 21 23:45:15 compute-0 nova_compute[247516]: </cpu>
Jan 21 23:45:15 compute-0 nova_compute[247516]:  _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537
Jan 21 23:45:15 compute-0 nova_compute[247516]: 2026-01-21 23:45:15.292 247523 DEBUG nova.scheduler.client.report [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Updated inventory for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 21 23:45:15 compute-0 nova_compute[247516]: 2026-01-21 23:45:15.292 247523 DEBUG nova.compute.provider_tree [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Updating resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 21 23:45:15 compute-0 nova_compute[247516]: 2026-01-21 23:45:15.292 247523 DEBUG nova.compute.provider_tree [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Updating inventory in ProviderTree for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 21 23:45:15 compute-0 ceph-mon[74318]: pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:15 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2790905697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:45:15 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/12885516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:45:15 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1266600637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:45:15 compute-0 nova_compute[247516]: 2026-01-21 23:45:15.426 247523 DEBUG nova.compute.provider_tree [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Updating resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 21 23:45:15 compute-0 nova_compute[247516]: 2026-01-21 23:45:15.455 247523 DEBUG nova.compute.resource_tracker [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 23:45:15 compute-0 nova_compute[247516]: 2026-01-21 23:45:15.455 247523 DEBUG oslo_concurrency.lockutils [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:45:15 compute-0 nova_compute[247516]: 2026-01-21 23:45:15.456 247523 DEBUG nova.service [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 21 23:45:15 compute-0 nova_compute[247516]: 2026-01-21 23:45:15.578 247523 DEBUG nova.service [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 21 23:45:15 compute-0 nova_compute[247516]: 2026-01-21 23:45:15.578 247523 DEBUG nova.servicegroup.drivers.db [None req-edb081de-bc67-4a09-b04e-95b4d555ef3d - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 21 23:45:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:15.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:16 compute-0 podman[248287]: 2026-01-21 23:45:16.029103589 +0000 UTC m=+0.120346997 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:45:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:16.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:17 compute-0 ceph-mon[74318]: pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:45:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:45:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:17.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:45:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:45:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:18.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:45:19 compute-0 ceph-mon[74318]: pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:19.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:19 compute-0 ceph-mgr[74614]: [devicehealth INFO root] Check health
Jan 21 23:45:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:20.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:21 compute-0 ceph-mon[74318]: pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:21.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:45:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:22.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:45:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:45:23 compute-0 sudo[248316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:45:23 compute-0 sudo[248316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:45:23 compute-0 sudo[248316]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:23 compute-0 sudo[248341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:45:23 compute-0 sudo[248341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:45:23 compute-0 sudo[248341]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:23 compute-0 ceph-mon[74318]: pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:45:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:23.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:45:23 compute-0 podman[248367]: 2026-01-21 23:45:23.957074095 +0000 UTC m=+0.070582629 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 21 23:45:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:45:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:24.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:45:24 compute-0 ceph-mon[74318]: pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:45:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:25.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:45:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:45:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:26.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:45:27 compute-0 ceph-mon[74318]: pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:45:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:45:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:27.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:45:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:45:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:28.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:45:29 compute-0 ceph-mon[74318]: pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:29.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:45:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:30.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:45:30 compute-0 ceph-mon[74318]: pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:45:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:31.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:45:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:45:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:32.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:45:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:45:33 compute-0 ceph-mon[74318]: pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:33.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:34.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:35 compute-0 ceph-mon[74318]: pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:45:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:35.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:45:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:45:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:36.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:45:37 compute-0 ceph-mon[74318]: pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:45:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:45:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:37.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:45:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:45:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:38.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:45:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 21 23:45:38 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2599403215' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:45:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 21 23:45:38 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2599403215' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:45:39
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'backups', '.rgw.root', 'vms', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data']
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:45:39 compute-0 ceph-mon[74318]: pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:39 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2599403215' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:45:39 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2599403215' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:45:39 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/270025542' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:45:39 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/270025542' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:45:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:45:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 21 23:45:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1705620072' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:45:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 21 23:45:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1705620072' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:45:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:39.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:40.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:40 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1705620072' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:45:40 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1705620072' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:45:41 compute-0 ceph-mon[74318]: pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:41.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:42.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:45:43 compute-0 sudo[248396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:45:43 compute-0 sudo[248396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:45:43 compute-0 sudo[248396]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:43 compute-0 sudo[248421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:45:43 compute-0 sudo[248421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:45:43 compute-0 sudo[248421]: pam_unix(sudo:session): session closed for user root
Jan 21 23:45:43 compute-0 ceph-mon[74318]: pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:45:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:43.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:45:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:44.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:45 compute-0 ceph-mon[74318]: pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:45:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:45.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:45:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:45:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:46.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:45:47 compute-0 podman[248448]: 2026-01-21 23:45:47.009327631 +0000 UTC m=+0.114702669 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:45:47 compute-0 ceph-mon[74318]: pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:45:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:45:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:47.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:45:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:45:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:48.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:45:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:45:48.740 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:45:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:45:48.742 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:45:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:45:48.742 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:45:49 compute-0 ceph-mon[74318]: pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:45:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:49.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:45:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:50.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:50 compute-0 ceph-mon[74318]: pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:51.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:52.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:45:53 compute-0 ceph-mon[74318]: pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:53.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:45:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:54.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:54 compute-0 podman[248478]: 2026-01-21 23:45:54.971461378 +0000 UTC m=+0.088085628 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 21 23:45:55 compute-0 ceph-mon[74318]: pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:55.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:45:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:56.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:45:57 compute-0 ceph-mon[74318]: pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:45:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:45:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:57.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:45:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:45:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:45:58.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:45:59 compute-0 ceph-mon[74318]: pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:45:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:45:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:45:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:45:59.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:46:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:00.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:01 compute-0 ceph-mon[74318]: pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:01.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:02.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:46:03 compute-0 ceph-mon[74318]: pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:03 compute-0 sudo[248504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:46:03 compute-0 sudo[248504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:03 compute-0 sudo[248504]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:03 compute-0 sudo[248529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:46:03 compute-0 sudo[248529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:03 compute-0 sudo[248529]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:46:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:03.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:46:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:46:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:04.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:46:04 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1129512731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:46:05 compute-0 ceph-mon[74318]: pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:05 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/82948665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:46:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:05.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:46:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:06.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:46:06 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/100575325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:46:06 compute-0 sudo[248555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:46:06 compute-0 sudo[248555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:06 compute-0 sudo[248555]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:06 compute-0 sudo[248580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:46:06 compute-0 sudo[248580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:06 compute-0 sudo[248580]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.581 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.583 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.583 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.584 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.600 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.601 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.602 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.602 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.602 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.603 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.603 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.629 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.629 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.629 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:46:06 compute-0 sudo[248605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.652 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.653 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.653 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.653 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 23:46:06 compute-0 nova_compute[247516]: 2026-01-21 23:46:06.654 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:46:06 compute-0 sudo[248605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:06 compute-0 sudo[248605]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:06 compute-0 sudo[248631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:46:06 compute-0 sudo[248631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:46:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:46:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:46:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:46:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:46:07 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/555904195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:46:07 compute-0 nova_compute[247516]: 2026-01-21 23:46:07.166 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:46:07 compute-0 sudo[248631]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 21 23:46:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 21 23:46:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 21 23:46:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 21 23:46:07 compute-0 nova_compute[247516]: 2026-01-21 23:46:07.392 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 23:46:07 compute-0 nova_compute[247516]: 2026-01-21 23:46:07.394 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5223MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 23:46:07 compute-0 nova_compute[247516]: 2026-01-21 23:46:07.395 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:46:07 compute-0 nova_compute[247516]: 2026-01-21 23:46:07.395 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:46:07 compute-0 ceph-mon[74318]: pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:07 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1684013833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:46:07 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:46:07 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:46:07 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/555904195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:46:07 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 21 23:46:07 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 21 23:46:07 compute-0 nova_compute[247516]: 2026-01-21 23:46:07.508 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 23:46:07 compute-0 nova_compute[247516]: 2026-01-21 23:46:07.508 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 23:46:07 compute-0 nova_compute[247516]: 2026-01-21 23:46:07.539 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:46:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:46:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:46:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:07.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:46:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:46:07 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2260020509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:46:08 compute-0 nova_compute[247516]: 2026-01-21 23:46:08.010 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:46:08 compute-0 nova_compute[247516]: 2026-01-21 23:46:08.017 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 23:46:08 compute-0 nova_compute[247516]: 2026-01-21 23:46:08.039 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 23:46:08 compute-0 nova_compute[247516]: 2026-01-21 23:46:08.041 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 23:46:08 compute-0 nova_compute[247516]: 2026-01-21 23:46:08.041 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:46:08 compute-0 nova_compute[247516]: 2026-01-21 23:46:08.042 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:46:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:46:08 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:46:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:46:08 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:46:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:46:08 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:46:08 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 72ec8c95-b07d-443f-8677-230922c694f4 does not exist
Jan 21 23:46:08 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 2abaa7a6-ecd8-4096-a33c-1f68329195b1 does not exist
Jan 21 23:46:08 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev c75c938e-3ccb-4845-9a5c-10e79e07854b does not exist
Jan 21 23:46:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:46:08 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:46:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:46:08 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:46:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:46:08 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:46:08 compute-0 sudo[248730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:46:08 compute-0 sudo[248730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:08 compute-0 sudo[248730]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:08 compute-0 sudo[248755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:46:08 compute-0 sudo[248755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:08 compute-0 sudo[248755]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:08 compute-0 sudo[248780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:46:08 compute-0 sudo[248780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:08 compute-0 sudo[248780]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:08 compute-0 sudo[248805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:46:08 compute-0 sudo[248805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:46:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:08.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:46:08 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2260020509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:46:08 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:46:08 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:46:08 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:46:08 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:46:08 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:46:08 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:46:08 compute-0 podman[248870]: 2026-01-21 23:46:08.746219794 +0000 UTC m=+0.059128111 container create f7e70107d8356354c8e76f7e58f97a108ac68587770415cf395b064293192341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dubinsky, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:46:08 compute-0 systemd[1]: Started libpod-conmon-f7e70107d8356354c8e76f7e58f97a108ac68587770415cf395b064293192341.scope.
Jan 21 23:46:08 compute-0 podman[248870]: 2026-01-21 23:46:08.718104924 +0000 UTC m=+0.031013261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:46:08 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:46:08 compute-0 podman[248870]: 2026-01-21 23:46:08.844160608 +0000 UTC m=+0.157069005 container init f7e70107d8356354c8e76f7e58f97a108ac68587770415cf395b064293192341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:46:08 compute-0 podman[248870]: 2026-01-21 23:46:08.852355674 +0000 UTC m=+0.165264001 container start f7e70107d8356354c8e76f7e58f97a108ac68587770415cf395b064293192341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dubinsky, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:46:08 compute-0 podman[248870]: 2026-01-21 23:46:08.857615769 +0000 UTC m=+0.170524096 container attach f7e70107d8356354c8e76f7e58f97a108ac68587770415cf395b064293192341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dubinsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:46:08 compute-0 tender_dubinsky[248884]: 167 167
Jan 21 23:46:08 compute-0 systemd[1]: libpod-f7e70107d8356354c8e76f7e58f97a108ac68587770415cf395b064293192341.scope: Deactivated successfully.
Jan 21 23:46:08 compute-0 podman[248870]: 2026-01-21 23:46:08.861869341 +0000 UTC m=+0.174777669 container died f7e70107d8356354c8e76f7e58f97a108ac68587770415cf395b064293192341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dubinsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 21 23:46:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-af17f382fa35459dd557ccf3fbf54e8db98337890e3753a82b3abb8480ccc2f3-merged.mount: Deactivated successfully.
Jan 21 23:46:08 compute-0 podman[248870]: 2026-01-21 23:46:08.907480219 +0000 UTC m=+0.220388526 container remove f7e70107d8356354c8e76f7e58f97a108ac68587770415cf395b064293192341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dubinsky, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:46:08 compute-0 systemd[1]: libpod-conmon-f7e70107d8356354c8e76f7e58f97a108ac68587770415cf395b064293192341.scope: Deactivated successfully.
Jan 21 23:46:09 compute-0 podman[248911]: 2026-01-21 23:46:09.127995618 +0000 UTC m=+0.062464715 container create 44406fd29d183c820455ecaf8a6e59a07fa21b8d73cd6936ba642ee01a70b64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 21 23:46:09 compute-0 systemd[1]: Started libpod-conmon-44406fd29d183c820455ecaf8a6e59a07fa21b8d73cd6936ba642ee01a70b64c.scope.
Jan 21 23:46:09 compute-0 podman[248911]: 2026-01-21 23:46:09.101050075 +0000 UTC m=+0.035519162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:46:09 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:46:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:46:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:46:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:46:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:46:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:46:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:46:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c921aee71be0854d86b12d7418c0d65a80e4022eafa07a8b55882f5cd4a1225/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:46:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c921aee71be0854d86b12d7418c0d65a80e4022eafa07a8b55882f5cd4a1225/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:46:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c921aee71be0854d86b12d7418c0d65a80e4022eafa07a8b55882f5cd4a1225/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:46:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c921aee71be0854d86b12d7418c0d65a80e4022eafa07a8b55882f5cd4a1225/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:46:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c921aee71be0854d86b12d7418c0d65a80e4022eafa07a8b55882f5cd4a1225/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:46:09 compute-0 podman[248911]: 2026-01-21 23:46:09.260461243 +0000 UTC m=+0.194930390 container init 44406fd29d183c820455ecaf8a6e59a07fa21b8d73cd6936ba642ee01a70b64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_knuth, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:46:09 compute-0 podman[248911]: 2026-01-21 23:46:09.281903843 +0000 UTC m=+0.216372920 container start 44406fd29d183c820455ecaf8a6e59a07fa21b8d73cd6936ba642ee01a70b64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_knuth, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 23:46:09 compute-0 podman[248911]: 2026-01-21 23:46:09.286296631 +0000 UTC m=+0.220765788 container attach 44406fd29d183c820455ecaf8a6e59a07fa21b8d73cd6936ba642ee01a70b64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:46:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:46:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:09.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:46:09 compute-0 ceph-mon[74318]: pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:10 compute-0 cool_knuth[248927]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:46:10 compute-0 cool_knuth[248927]: --> relative data size: 1.0
Jan 21 23:46:10 compute-0 cool_knuth[248927]: --> All data devices are unavailable
Jan 21 23:46:10 compute-0 systemd[1]: libpod-44406fd29d183c820455ecaf8a6e59a07fa21b8d73cd6936ba642ee01a70b64c.scope: Deactivated successfully.
Jan 21 23:46:10 compute-0 podman[248911]: 2026-01-21 23:46:10.150453928 +0000 UTC m=+1.084922995 container died 44406fd29d183c820455ecaf8a6e59a07fa21b8d73cd6936ba642ee01a70b64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_knuth, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 23:46:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c921aee71be0854d86b12d7418c0d65a80e4022eafa07a8b55882f5cd4a1225-merged.mount: Deactivated successfully.
Jan 21 23:46:10 compute-0 podman[248911]: 2026-01-21 23:46:10.216772253 +0000 UTC m=+1.151241300 container remove 44406fd29d183c820455ecaf8a6e59a07fa21b8d73cd6936ba642ee01a70b64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 23:46:10 compute-0 systemd[1]: libpod-conmon-44406fd29d183c820455ecaf8a6e59a07fa21b8d73cd6936ba642ee01a70b64c.scope: Deactivated successfully.
Jan 21 23:46:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:10 compute-0 sudo[248805]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:10 compute-0 sudo[248954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:46:10 compute-0 sudo[248954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:10 compute-0 sudo[248954]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:10 compute-0 sudo[248979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:46:10 compute-0 sudo[248979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:46:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:10.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:46:10 compute-0 sudo[248979]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:10 compute-0 sudo[249004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:46:10 compute-0 sudo[249004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:10 compute-0 sudo[249004]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:10 compute-0 sudo[249029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:46:10 compute-0 sudo[249029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:10 compute-0 ceph-mon[74318]: pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:10 compute-0 podman[249092]: 2026-01-21 23:46:10.933384724 +0000 UTC m=+0.050388488 container create c5cf78fc62b1f758a26027cafc665bf3e5257cdb013755e3dd7ff07f78398625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_greider, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 23:46:10 compute-0 systemd[1]: Started libpod-conmon-c5cf78fc62b1f758a26027cafc665bf3e5257cdb013755e3dd7ff07f78398625.scope.
Jan 21 23:46:11 compute-0 podman[249092]: 2026-01-21 23:46:10.911277442 +0000 UTC m=+0.028281196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:46:11 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:46:11 compute-0 podman[249092]: 2026-01-21 23:46:11.031242275 +0000 UTC m=+0.148246079 container init c5cf78fc62b1f758a26027cafc665bf3e5257cdb013755e3dd7ff07f78398625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_greider, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 21 23:46:11 compute-0 podman[249092]: 2026-01-21 23:46:11.039329069 +0000 UTC m=+0.156332793 container start c5cf78fc62b1f758a26027cafc665bf3e5257cdb013755e3dd7ff07f78398625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_greider, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 21 23:46:11 compute-0 podman[249092]: 2026-01-21 23:46:11.043095536 +0000 UTC m=+0.160099350 container attach c5cf78fc62b1f758a26027cafc665bf3e5257cdb013755e3dd7ff07f78398625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_greider, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 21 23:46:11 compute-0 wonderful_greider[249109]: 167 167
Jan 21 23:46:11 compute-0 systemd[1]: libpod-c5cf78fc62b1f758a26027cafc665bf3e5257cdb013755e3dd7ff07f78398625.scope: Deactivated successfully.
Jan 21 23:46:11 compute-0 podman[249092]: 2026-01-21 23:46:11.04864377 +0000 UTC m=+0.165647524 container died c5cf78fc62b1f758a26027cafc665bf3e5257cdb013755e3dd7ff07f78398625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_greider, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:46:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-50bdb0c602457b859661f1504cbfb74275d8740aadd98329a121d197d6946784-merged.mount: Deactivated successfully.
Jan 21 23:46:11 compute-0 podman[249092]: 2026-01-21 23:46:11.094712382 +0000 UTC m=+0.211716106 container remove c5cf78fc62b1f758a26027cafc665bf3e5257cdb013755e3dd7ff07f78398625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_greider, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:46:11 compute-0 systemd[1]: libpod-conmon-c5cf78fc62b1f758a26027cafc665bf3e5257cdb013755e3dd7ff07f78398625.scope: Deactivated successfully.
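
[annotation] The lines above show the full lifecycle of a short-lived cephadm helper container: podman logs create, init, start, and attach events, the container prints its output ("167 167", consistent with a probe of the ceph user's uid/gid inside the image, where 167 is the standard ceph uid/gid), then died and remove follow within milliseconds and systemd deactivates the libpod-conmon scope. A minimal sketch of the same one-shot pattern is below; the exact probe command cephadm ran is not recorded in this log, so the stat invocation is only an illustrative stand-in.

    # Hedged sketch: reproduce the one-shot probe pattern logged above.
    # Assumptions: podman is on PATH and the image digest is pullable; the
    # real command cephadm executed is not shown in the journal.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def probe_uid_gid(image: str = IMAGE) -> str:
        # --rm yields exactly the create/init/start/attach/died/remove
        # sequence journald recorded for wonderful_greider.
        out = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", "stat",
             image, "-c", "%u %g", "/var/lib/ceph"],
            check=True, capture_output=True, text=True,
        )
        return out.stdout.strip()  # "167 167" on upstream Ceph images
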
Jan 21 23:46:11 compute-0 podman[249132]: 2026-01-21 23:46:11.321774596 +0000 UTC m=+0.070714154 container create 24d6202acf2cbb2468df8ee6dc37fe7246cd9e563937e58652ce07f2c6d4486e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_nightingale, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 21 23:46:11 compute-0 systemd[1]: Started libpod-conmon-24d6202acf2cbb2468df8ee6dc37fe7246cd9e563937e58652ce07f2c6d4486e.scope.
Jan 21 23:46:11 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:46:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761d5b35187a3c7be1157dc09ab4ffc8bbdca1db91ddd7bb5ad0608760ae7092/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:46:11 compute-0 podman[249132]: 2026-01-21 23:46:11.292401087 +0000 UTC m=+0.041340685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:46:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761d5b35187a3c7be1157dc09ab4ffc8bbdca1db91ddd7bb5ad0608760ae7092/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:46:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761d5b35187a3c7be1157dc09ab4ffc8bbdca1db91ddd7bb5ad0608760ae7092/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:46:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761d5b35187a3c7be1157dc09ab4ffc8bbdca1db91ddd7bb5ad0608760ae7092/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:46:11 compute-0 podman[249132]: 2026-01-21 23:46:11.400947043 +0000 UTC m=+0.149886641 container init 24d6202acf2cbb2468df8ee6dc37fe7246cd9e563937e58652ce07f2c6d4486e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 21 23:46:11 compute-0 podman[249132]: 2026-01-21 23:46:11.411816233 +0000 UTC m=+0.160755791 container start 24d6202acf2cbb2468df8ee6dc37fe7246cd9e563937e58652ce07f2c6d4486e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_nightingale, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:46:11 compute-0 podman[249132]: 2026-01-21 23:46:11.415461957 +0000 UTC m=+0.164401585 container attach 24d6202acf2cbb2468df8ee6dc37fe7246cd9e563937e58652ce07f2c6d4486e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_nightingale, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 21 23:46:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:11.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
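
[annotation] The radosgw triplet above (starting new request / req done / beast access line) repeats roughly every two seconds from 192.168.122.100 and 192.168.122.102 as an anonymous "HEAD / HTTP/1.0" returning 200 — the cadence and shape of an external load-balancer health probe. A minimal sketch of such a probe follows; RGW_HOST and RGW_PORT are hypothetical, since the log records only the probing clients, not the address beast is listening on.

    # Hedged sketch of the probe the beast access lines record: an anonymous
    # HEAD / that should return HTTP 200 from radosgw.
    import http.client

    RGW_HOST = "192.168.122.100"  # assumption: RGW endpoint address
    RGW_PORT = 8080               # assumption: beast frontend port

    def rgw_alive(host: str = RGW_HOST, port: int = RGW_PORT) -> bool:
        conn = http.client.HTTPConnection(host, port, timeout=5)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        finally:
            conn.close()
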
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]: {
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:     "1": [
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:         {
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:             "devices": [
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:                 "/dev/loop3"
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:             ],
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:             "lv_name": "ceph_lv0",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:             "lv_size": "7511998464",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:             "name": "ceph_lv0",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:             "tags": {
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:                 "ceph.cluster_name": "ceph",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:                 "ceph.crush_device_class": "",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:                 "ceph.encrypted": "0",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:                 "ceph.osd_id": "1",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:                 "ceph.type": "block",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:                 "ceph.vdo": "0"
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:             },
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:             "type": "block",
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:             "vg_name": "ceph_vg0"
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:         }
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]:     ]
Jan 21 23:46:12 compute-0 peaceful_nightingale[249150]: }
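
[annotation] The JSON printed by peaceful_nightingale above is keyed by OSD id and carries the logical volume, its LVM tags, and the ceph.osd_fsid — a shape consistent with ceph-volume lvm list --format json. Because the journal prefixes every line, a consumer reading this capture has to strip the syslog header before parsing. A minimal sketch, assuming the standard "MMM dd HH:MM:SS host unit[pid]: " prefix layout seen here:

    # Hedged sketch: recover the JSON a container printed from journald
    # lines like the ones above, then index OSDs by id.
    import json, re

    PREFIX = re.compile(r"^\w{3} +\d+ [\d:]{8} \S+ \S+\[\d+\]: ")

    def parse_ceph_volume_json(journal_lines):
        payload = "".join(PREFIX.sub("", ln) for ln in journal_lines)
        listing = json.loads(payload)
        # e.g. {"1": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", ...}]}
        return {
            osd_id: [lv["lv_path"] for lv in lvs]
            for osd_id, lvs in listing.items()
        }
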
Jan 21 23:46:12 compute-0 systemd[1]: libpod-24d6202acf2cbb2468df8ee6dc37fe7246cd9e563937e58652ce07f2c6d4486e.scope: Deactivated successfully.
Jan 21 23:46:12 compute-0 podman[249132]: 2026-01-21 23:46:12.201694936 +0000 UTC m=+0.950634504 container died 24d6202acf2cbb2468df8ee6dc37fe7246cd9e563937e58652ce07f2c6d4486e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_nightingale, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 21 23:46:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-761d5b35187a3c7be1157dc09ab4ffc8bbdca1db91ddd7bb5ad0608760ae7092-merged.mount: Deactivated successfully.
Jan 21 23:46:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:12 compute-0 podman[249132]: 2026-01-21 23:46:12.279305815 +0000 UTC m=+1.028245363 container remove 24d6202acf2cbb2468df8ee6dc37fe7246cd9e563937e58652ce07f2c6d4486e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_nightingale, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:46:12 compute-0 systemd[1]: libpod-conmon-24d6202acf2cbb2468df8ee6dc37fe7246cd9e563937e58652ce07f2c6d4486e.scope: Deactivated successfully.
Jan 21 23:46:12 compute-0 sudo[249029]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:46:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:12.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:46:12 compute-0 sudo[249171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:46:12 compute-0 sudo[249171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:12 compute-0 sudo[249171]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:12 compute-0 sudo[249196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:46:12 compute-0 sudo[249196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:12 compute-0 sudo[249196]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:46:12 compute-0 sudo[249221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:46:12 compute-0 sudo[249221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:12 compute-0 sudo[249221]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:12 compute-0 sudo[249246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:46:12 compute-0 sudo[249246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:13 compute-0 podman[249311]: 2026-01-21 23:46:13.104279236 +0000 UTC m=+0.060230936 container create 716ab4dffdbcb75c96bf3162bde0a4061f9e3aa13ce1c8b1ee9295dfc8afc537 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_nobel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:46:13 compute-0 systemd[1]: Started libpod-conmon-716ab4dffdbcb75c96bf3162bde0a4061f9e3aa13ce1c8b1ee9295dfc8afc537.scope.
Jan 21 23:46:13 compute-0 podman[249311]: 2026-01-21 23:46:13.076726764 +0000 UTC m=+0.032678504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:46:13 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:46:13 compute-0 podman[249311]: 2026-01-21 23:46:13.195920513 +0000 UTC m=+0.151872183 container init 716ab4dffdbcb75c96bf3162bde0a4061f9e3aa13ce1c8b1ee9295dfc8afc537 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 21 23:46:13 compute-0 podman[249311]: 2026-01-21 23:46:13.205854434 +0000 UTC m=+0.161806094 container start 716ab4dffdbcb75c96bf3162bde0a4061f9e3aa13ce1c8b1ee9295dfc8afc537 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:46:13 compute-0 podman[249311]: 2026-01-21 23:46:13.209665613 +0000 UTC m=+0.165617273 container attach 716ab4dffdbcb75c96bf3162bde0a4061f9e3aa13ce1c8b1ee9295dfc8afc537 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 21 23:46:13 compute-0 mystifying_nobel[249327]: 167 167
Jan 21 23:46:13 compute-0 systemd[1]: libpod-716ab4dffdbcb75c96bf3162bde0a4061f9e3aa13ce1c8b1ee9295dfc8afc537.scope: Deactivated successfully.
Jan 21 23:46:13 compute-0 podman[249311]: 2026-01-21 23:46:13.21275708 +0000 UTC m=+0.168708770 container died 716ab4dffdbcb75c96bf3162bde0a4061f9e3aa13ce1c8b1ee9295dfc8afc537 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 21 23:46:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-92557170fd596aeb001d14cfb61cce470fa6b1bd633cacb40653419766b903b0-merged.mount: Deactivated successfully.
Jan 21 23:46:13 compute-0 podman[249311]: 2026-01-21 23:46:13.269069272 +0000 UTC m=+0.225020922 container remove 716ab4dffdbcb75c96bf3162bde0a4061f9e3aa13ce1c8b1ee9295dfc8afc537 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_nobel, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:46:13 compute-0 systemd[1]: libpod-conmon-716ab4dffdbcb75c96bf3162bde0a4061f9e3aa13ce1c8b1ee9295dfc8afc537.scope: Deactivated successfully.
Jan 21 23:46:13 compute-0 ceph-mon[74318]: pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:13 compute-0 podman[249353]: 2026-01-21 23:46:13.520412795 +0000 UTC m=+0.072636173 container create 31c23c9b3a81980498f2e6063f2a8662b1547836d67694ee04cde5f0de27e411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lewin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:46:13 compute-0 systemd[1]: Started libpod-conmon-31c23c9b3a81980498f2e6063f2a8662b1547836d67694ee04cde5f0de27e411.scope.
Jan 21 23:46:13 compute-0 podman[249353]: 2026-01-21 23:46:13.4901907 +0000 UTC m=+0.042414138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:46:13 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47fc34649188dfeab4bc3b5ebc3ea5aa30231470807fae40f5a2ac46ed90ac98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47fc34649188dfeab4bc3b5ebc3ea5aa30231470807fae40f5a2ac46ed90ac98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47fc34649188dfeab4bc3b5ebc3ea5aa30231470807fae40f5a2ac46ed90ac98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47fc34649188dfeab4bc3b5ebc3ea5aa30231470807fae40f5a2ac46ed90ac98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:46:13 compute-0 podman[249353]: 2026-01-21 23:46:13.628231049 +0000 UTC m=+0.180454467 container init 31c23c9b3a81980498f2e6063f2a8662b1547836d67694ee04cde5f0de27e411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lewin, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:46:13 compute-0 podman[249353]: 2026-01-21 23:46:13.64042436 +0000 UTC m=+0.192647728 container start 31c23c9b3a81980498f2e6063f2a8662b1547836d67694ee04cde5f0de27e411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lewin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:46:13 compute-0 podman[249353]: 2026-01-21 23:46:13.644829758 +0000 UTC m=+0.197053136 container attach 31c23c9b3a81980498f2e6063f2a8662b1547836d67694ee04cde5f0de27e411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lewin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 23:46:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:46:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:13.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:46:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:46:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:14.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:46:14 compute-0 pensive_lewin[249369]: {
Jan 21 23:46:14 compute-0 pensive_lewin[249369]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:46:14 compute-0 pensive_lewin[249369]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:46:14 compute-0 pensive_lewin[249369]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:46:14 compute-0 pensive_lewin[249369]:         "osd_id": 1,
Jan 21 23:46:14 compute-0 pensive_lewin[249369]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:46:14 compute-0 pensive_lewin[249369]:         "type": "bluestore"
Jan 21 23:46:14 compute-0 pensive_lewin[249369]:     }
Jan 21 23:46:14 compute-0 pensive_lewin[249369]: }
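
[annotation] This second listing comes from the cephadm wrapper invocation logged at 23:46:12 (ceph-volume ... raw list --format json) and is keyed by osd_uuid. It describes the same OSD as the lvm listing above: osd_uuid 4f45f4f4-edfc-474c-93fc-45d596171ed8 matches the ceph.osd_fsid LV tag, and /dev/mapper/ceph_vg0-ceph_lv0 is the device-mapper path for /dev/ceph_vg0/ceph_lv0. A minimal sketch joining the two, with both input shapes taken directly from the log output:

    # Hedged sketch: join the two ceph-volume listings on the OSD fsid.
    # `lvm` is the dict from lvm list (keyed by osd id), `raw` the dict
    # from raw list (keyed by osd uuid).
    def join_listings(lvm: dict, raw: dict) -> dict:
        by_fsid = {
            lv["tags"]["ceph.osd_fsid"]: (osd_id, lv)
            for osd_id, lvs in lvm.items() for lv in lvs
        }
        joined = {}
        for fsid, entry in raw.items():
            osd_id, lv = by_fsid[fsid]
            joined[fsid] = {
                "osd_id": int(osd_id),        # "1" in lvm list, 1 in raw list
                "lv_path": lv["lv_path"],     # /dev/ceph_vg0/ceph_lv0
                "dm_path": entry["device"],   # /dev/mapper/ceph_vg0-ceph_lv0
                "type": entry["type"],        # bluestore
            }
        return joined
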
Jan 21 23:46:14 compute-0 systemd[1]: libpod-31c23c9b3a81980498f2e6063f2a8662b1547836d67694ee04cde5f0de27e411.scope: Deactivated successfully.
Jan 21 23:46:14 compute-0 podman[249353]: 2026-01-21 23:46:14.568462196 +0000 UTC m=+1.120685564 container died 31c23c9b3a81980498f2e6063f2a8662b1547836d67694ee04cde5f0de27e411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lewin, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:46:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-47fc34649188dfeab4bc3b5ebc3ea5aa30231470807fae40f5a2ac46ed90ac98-merged.mount: Deactivated successfully.
Jan 21 23:46:14 compute-0 podman[249353]: 2026-01-21 23:46:14.676891739 +0000 UTC m=+1.229115077 container remove 31c23c9b3a81980498f2e6063f2a8662b1547836d67694ee04cde5f0de27e411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lewin, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 23:46:14 compute-0 systemd[1]: libpod-conmon-31c23c9b3a81980498f2e6063f2a8662b1547836d67694ee04cde5f0de27e411.scope: Deactivated successfully.
Jan 21 23:46:14 compute-0 sudo[249246]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:46:14 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:46:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:46:14 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:46:14 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 20a30050-fbc5-478e-99c8-966617bc7848 does not exist
Jan 21 23:46:14 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev f6c397f8-9610-4c74-b0f1-5d6870b1e854 does not exist
Jan 21 23:46:14 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev dd170c4f-b202-4f96-8dbb-13dcb80e6126 does not exist
Jan 21 23:46:14 compute-0 sudo[249402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:46:14 compute-0 sudo[249402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:14 compute-0 sudo[249402]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:14 compute-0 sudo[249427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:46:14 compute-0 sudo[249427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:14 compute-0 sudo[249427]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:15 compute-0 ceph-mon[74318]: pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:15 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:46:15 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:46:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:46:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:15.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:46:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:16.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:17 compute-0 ceph-mon[74318]: pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:46:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:17.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:18 compute-0 podman[249454]: 2026-01-21 23:46:18.052013259 +0000 UTC m=+0.147100444 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
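
[annotation] The health_status=healthy event above is emitted by podman's periodic healthcheck runner; per the config_data in the same line, the configured test is '/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/ovn_controller. The same check can be run on demand; a minimal sketch, assuming podman is on PATH and the container name matches the log:

    # Hedged sketch: trigger the configured healthcheck on demand.
    # `podman healthcheck run` executes the container's test command
    # ('/openstack/healthcheck' here) and exits 0 when healthy.
    import subprocess

    def container_healthy(name: str = "ovn_controller") -> bool:
        return subprocess.run(
            ["podman", "healthcheck", "run", name],
            capture_output=True,
        ).returncode == 0
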
Jan 21 23:46:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:18.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:19 compute-0 ceph-mon[74318]: pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:46:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:19.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:46:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:46:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:20.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:46:21 compute-0 ceph-mon[74318]: pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:21.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:22.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:46:23 compute-0 ceph-mon[74318]: pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:23 compute-0 sudo[249484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:46:23 compute-0 sudo[249484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:23 compute-0 sudo[249484]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:23 compute-0 sudo[249509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:46:23 compute-0 sudo[249509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:23 compute-0 sudo[249509]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:46:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:23.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:46:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:24.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:25 compute-0 ceph-mon[74318]: pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:25.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:25 compute-0 podman[249535]: 2026-01-21 23:46:25.968515075 +0000 UTC m=+0.079692325 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 21 23:46:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:46:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:26.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:46:27 compute-0 ceph-mon[74318]: pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:46:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:46:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:27.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:46:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:28.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:28 compute-0 ceph-mon[74318]: pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:29.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:30.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:31 compute-0 ceph-mon[74318]: pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:31.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:46:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:32.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:46:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:46:33 compute-0 ceph-mon[74318]: pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:33.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:34.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:35 compute-0 ceph-mon[74318]: pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:46:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:35.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:46:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:36.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:37 compute-0 ceph-mon[74318]: pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:46:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:37.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:46:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:38.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:46:39
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'volumes', 'default.rgw.meta', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
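
[annotation] The balancer pass above ran in upmap mode against the eleven listed pools and prepared 0/10 changes, i.e. it found no PG remaps worth proposing — unsurprising with all 305 PGs active+clean. The current mode and any prepared plans can be queried from the CLI; a minimal sketch, assuming a local admin keyring and the common ceph -f json output flag:

    # Hedged sketch: query the balancer state the mgr log lines describe.
    import json, subprocess

    def balancer_status() -> dict:
        out = subprocess.check_output(
            ["ceph", "balancer", "status", "-f", "json"], text=True
        )
        return json.loads(out)  # includes the active mode, e.g. "upmap"
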
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:46:39 compute-0 ceph-mon[74318]: pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:46:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:46:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:46:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:39.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:46:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:40.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:41 compute-0 ceph-mon[74318]: pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:41.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:46:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:42.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:46:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:46:43 compute-0 ceph-mon[74318]: pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:43 compute-0 sudo[249563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:46:43 compute-0 sudo[249563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:43 compute-0 sudo[249563]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:46:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:43.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:46:43 compute-0 sudo[249588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:46:43 compute-0 sudo[249588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:46:43 compute-0 sudo[249588]: pam_unix(sudo:session): session closed for user root
Jan 21 23:46:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:44.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:45 compute-0 ceph-mon[74318]: pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:46:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:45.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:46:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:46:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:46.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:46:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 21 23:46:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 21 23:46:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 21 23:46:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 21 23:46:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 21 23:46:47 compute-0 ceph-mon[74318]: pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:46:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:47.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:48.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:46:48.742 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:46:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:46:48.744 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:46:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:46:48.745 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:46:49 compute-0 podman[249615]: 2026-01-21 23:46:49.028523933 +0000 UTC m=+0.133004793 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 21 23:46:49 compute-0 ceph-mon[74318]: pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:46:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:49.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 71 KiB/s rd, 0 B/s wr, 118 op/s
Jan 21 23:46:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:50.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:50 compute-0 ceph-mon[74318]: pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 71 KiB/s rd, 0 B/s wr, 118 op/s
Jan 21 23:46:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:51.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 167 op/s
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:46:52.340745) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039212340797, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2108, "num_deletes": 251, "total_data_size": 4067062, "memory_usage": 4118600, "flush_reason": "Manual Compaction"}
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039212365757, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 3979576, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17262, "largest_seqno": 19368, "table_properties": {"data_size": 3969980, "index_size": 6154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18778, "raw_average_key_size": 19, "raw_value_size": 3951036, "raw_average_value_size": 4207, "num_data_blocks": 275, "num_entries": 939, "num_filter_entries": 939, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769038983, "oldest_key_time": 1769038983, "file_creation_time": 1769039212, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 25080 microseconds, and 9867 cpu microseconds.
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:46:52.365826) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 3979576 bytes OK
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:46:52.365851) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:46:52.375251) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:46:52.375284) EVENT_LOG_v1 {"time_micros": 1769039212375274, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:46:52.375311) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 4058631, prev total WAL file size 4058631, number of live WAL files 2.
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:46:52.377411) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(3886KB)], [41(7564KB)]
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039212377588, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 11726015, "oldest_snapshot_seqno": -1}
Jan 21 23:46:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:52.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4480 keys, 9696533 bytes, temperature: kUnknown
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039212470762, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 9696533, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9663867, "index_size": 20396, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 111923, "raw_average_key_size": 24, "raw_value_size": 9579956, "raw_average_value_size": 2138, "num_data_blocks": 846, "num_entries": 4480, "num_filter_entries": 4480, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769039212, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:46:52.471053) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9696533 bytes
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:46:52.473184) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.8 rd, 104.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 7.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(5.4) write-amplify(2.4) OK, records in: 4999, records dropped: 519 output_compression: NoCompression
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:46:52.473214) EVENT_LOG_v1 {"time_micros": 1769039212473201, "job": 20, "event": "compaction_finished", "compaction_time_micros": 93248, "compaction_time_cpu_micros": 43416, "output_level": 6, "num_output_files": 1, "total_output_size": 9696533, "num_input_records": 4999, "num_output_records": 4480, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039212474633, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039212477191, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:46:52.377314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:46:52.477389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:46:52.477393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:46:52.477395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:46:52.477397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:46:52 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:46:52.477399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:46:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:46:53 compute-0 ceph-mon[74318]: pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 167 op/s
Jan 21 23:46:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:53.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:46:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 167 op/s
Jan 21 23:46:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:54.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:55 compute-0 ceph-mon[74318]: pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 167 op/s
Jan 21 23:46:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:55.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 104 KiB/s rd, 0 B/s wr, 174 op/s
Jan 21 23:46:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:56.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:56 compute-0 podman[249645]: 2026-01-21 23:46:56.966403178 +0000 UTC m=+0.069090387 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 21 23:46:57 compute-0 ceph-mon[74318]: pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 104 KiB/s rd, 0 B/s wr, 174 op/s
Jan 21 23:46:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:46:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:57.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 104 KiB/s rd, 0 B/s wr, 174 op/s
Jan 21 23:46:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:46:58.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:46:59 compute-0 ceph-mon[74318]: pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 104 KiB/s rd, 0 B/s wr, 174 op/s
Jan 21 23:46:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:46:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:46:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:46:59.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 104 KiB/s rd, 0 B/s wr, 174 op/s
Jan 21 23:47:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:47:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:00.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:47:01 compute-0 ceph-mon[74318]: pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 104 KiB/s rd, 0 B/s wr, 174 op/s
Jan 21 23:47:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:01.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 55 op/s
Jan 21 23:47:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:02.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:47:03 compute-0 ceph-mon[74318]: pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 55 op/s
Jan 21 23:47:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:03.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:03 compute-0 sudo[249668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:47:03 compute-0 sudo[249668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:03 compute-0 sudo[249668]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:04 compute-0 sudo[249693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:47:04 compute-0 sudo[249693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:04 compute-0 sudo[249693]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.1 KiB/s rd, 0 B/s wr, 6 op/s
Jan 21 23:47:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:47:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:04.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:47:05 compute-0 ceph-mon[74318]: pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.1 KiB/s rd, 0 B/s wr, 6 op/s
Jan 21 23:47:05 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/950794958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:47:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:05.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.1 KiB/s rd, 0 B/s wr, 6 op/s
Jan 21 23:47:06 compute-0 nova_compute[247516]: 2026-01-21 23:47:06.449 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:47:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:06.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:06 compute-0 nova_compute[247516]: 2026-01-21 23:47:06.471 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:47:06 compute-0 nova_compute[247516]: 2026-01-21 23:47:06.472 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 23:47:06 compute-0 nova_compute[247516]: 2026-01-21 23:47:06.472 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 23:47:06 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2223904561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:47:06 compute-0 nova_compute[247516]: 2026-01-21 23:47:06.485 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 23:47:06 compute-0 nova_compute[247516]: 2026-01-21 23:47:06.485 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:47:06 compute-0 nova_compute[247516]: 2026-01-21 23:47:06.485 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:47:06 compute-0 nova_compute[247516]: 2026-01-21 23:47:06.485 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:47:06 compute-0 nova_compute[247516]: 2026-01-21 23:47:06.486 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:47:06 compute-0 nova_compute[247516]: 2026-01-21 23:47:06.486 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:47:06 compute-0 nova_compute[247516]: 2026-01-21 23:47:06.486 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:47:06 compute-0 nova_compute[247516]: 2026-01-21 23:47:06.486 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 23:47:06 compute-0 nova_compute[247516]: 2026-01-21 23:47:06.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:47:06 compute-0 nova_compute[247516]: 2026-01-21 23:47:06.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:47:07 compute-0 nova_compute[247516]: 2026-01-21 23:47:07.025 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:47:07 compute-0 nova_compute[247516]: 2026-01-21 23:47:07.026 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:47:07 compute-0 nova_compute[247516]: 2026-01-21 23:47:07.026 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:47:07 compute-0 nova_compute[247516]: 2026-01-21 23:47:07.027 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 23:47:07 compute-0 nova_compute[247516]: 2026-01-21 23:47:07.027 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:47:07 compute-0 ceph-mon[74318]: pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.1 KiB/s rd, 0 B/s wr, 6 op/s
Jan 21 23:47:07 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1297667264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:47:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:47:07 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3746658542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:47:07 compute-0 nova_compute[247516]: 2026-01-21 23:47:07.533 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:47:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:47:07 compute-0 nova_compute[247516]: 2026-01-21 23:47:07.781 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 23:47:07 compute-0 nova_compute[247516]: 2026-01-21 23:47:07.784 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5257MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 23:47:07 compute-0 nova_compute[247516]: 2026-01-21 23:47:07.784 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:47:07 compute-0 nova_compute[247516]: 2026-01-21 23:47:07.785 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:47:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:07.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:07 compute-0 nova_compute[247516]: 2026-01-21 23:47:07.891 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 23:47:07 compute-0 nova_compute[247516]: 2026-01-21 23:47:07.891 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 23:47:07 compute-0 nova_compute[247516]: 2026-01-21 23:47:07.911 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:47:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:47:08 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/512034983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:47:08 compute-0 nova_compute[247516]: 2026-01-21 23:47:08.383 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:47:08 compute-0 nova_compute[247516]: 2026-01-21 23:47:08.389 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 23:47:08 compute-0 nova_compute[247516]: 2026-01-21 23:47:08.409 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 23:47:08 compute-0 nova_compute[247516]: 2026-01-21 23:47:08.412 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 23:47:08 compute-0 nova_compute[247516]: 2026-01-21 23:47:08.412 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:47:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:47:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:08.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:47:08 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3746658542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:47:08 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/723373237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:47:08 compute-0 ceph-mon[74318]: pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:08 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/512034983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:47:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:47:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:47:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:47:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:47:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:47:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:47:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:09.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:10.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:10 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:47:10.507 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 23:47:10 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:47:10.510 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 23:47:10 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:47:10.513 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
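
The three ovn_metadata_agent lines above are OVN's liveness handshake: ovn-northd raises SB_Global.nb_cfg (1 to 2 here), the agent's SbGlobalUpdateEvent matches the change, and the agent acknowledges by writing the new value into its own Chassis_Private external_ids under 'neutron:ovn-metadata-sb-cfg' via the DbSetCommand shown. A schematic of that acknowledgment in plain Python (plain dicts stand in for the OVSDB rows; this is not the ovsdbapp API):

    # Schematic nb_cfg acknowledgment, mirroring the DbSetCommand above.
    sb_global = {'nb_cfg': 2}
    chassis_private = {
        'uuid': 'c2a76040-4536-46ac-93c9-20aa76f22ff4',
        'external_ids': {},
    }

    def on_sb_global_update(row, chassis):
        # Equivalent of DbSetCommand(table=Chassis_Private, record=...,
        #     col_values=(('external_ids',
        #                  {'neutron:ovn-metadata-sb-cfg': ...}),),
        #     if_exists=True)
        chassis['external_ids']['neutron:ovn-metadata-sb-cfg'] = str(row['nb_cfg'])

    on_sb_global_update(sb_global, chassis_private)
    assert chassis_private['external_ids']['neutron:ovn-metadata-sb-cfg'] == '2'
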
Jan 21 23:47:11 compute-0 ceph-mon[74318]: pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:11.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:12.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:47:13 compute-0 ceph-mon[74318]: pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:13.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:14.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:15 compute-0 ceph-mon[74318]: pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:15 compute-0 sudo[249767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:47:15 compute-0 sudo[249767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:15 compute-0 sudo[249767]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:15 compute-0 sudo[249793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:47:15 compute-0 sudo[249793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:15 compute-0 sudo[249793]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:15 compute-0 sudo[249818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:47:15 compute-0 sudo[249818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:15 compute-0 sudo[249818]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:15 compute-0 sudo[249843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:47:15 compute-0 sudo[249843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:15.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:16 compute-0 sudo[249843]: pam_unix(sudo:session): session closed for user root
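
The sudo sequence at 23:47:15 (/bin/true, which python3, /bin/true, then the long cephadm ... gather-facts command whose session just closed) is cephadm's remote-execution pattern: the mgr's ssh connection as ceph-admin first probes that passwordless sudo and python3 work, then runs the copied cephadm binary under a timeout. A hypothetical sketch of that probe-then-run sequence (host, user, and the shortened binary path are illustrative; the real orchestrator drives this over its own ssh library):

    import subprocess

    HOST = 'compute-0'
    USER = 'ceph-admin'
    CEPHADM = '/var/lib/ceph/<fsid>/cephadm.<digest>'  # full path in the log

    def ssh_sudo(*cmd):
        # Each call produces one pam_unix "session opened"/"session closed"
        # pair like those in the journal above.
        return subprocess.run(['ssh', f'{USER}@{HOST}', 'sudo', *cmd],
                              capture_output=True, text=True, check=True)

    ssh_sudo('/bin/true')                  # is passwordless sudo working?
    ssh_sudo('/bin/which', 'python3')      # is a python3 interpreter present?
    ssh_sudo('/bin/python3', CEPHADM, '--timeout', '895', 'gather-facts')
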
Jan 21 23:47:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 21 23:47:16 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 23:47:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:47:16 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:47:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:47:16 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:47:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:47:16 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:47:16 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev b5b6465c-f7e8-437b-a6b5-7f2d9ebf3def does not exist
Jan 21 23:47:16 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 2ac7facc-8d68-4e11-b552-b210cb2e0f83 does not exist
Jan 21 23:47:16 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev a33bd1fb-c118-4dc0-8bc3-c1a2c0f5bf46 does not exist
Jan 21 23:47:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:47:16 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:47:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:47:16 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:47:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:47:16 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:47:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 23:47:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:47:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:47:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:47:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:47:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:47:16 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
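
Every handle_command line above pairs with an audit-channel dispatch of the same JSON payload: the mgr (mgr.compute-0.boqcsl) is walking through config cleanup, minimal-conf generation, keyring fetches, and an osd tree query as part of its OSD deployment pass. Commands of the same JSON shape can be sent from Python with the rados bindings; a minimal sketch assuming a reachable cluster and client.admin credentials:

    import json
    import rados

    # Uses the local ceph.conf and the default client.admin keyring.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Same shape as the payloads in the audit lines above.
    cmd = {'prefix': 'osd tree', 'states': ['destroyed'], 'format': 'json'}
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
    if ret == 0:
        tree = json.loads(outbuf)   # decoded "osd tree ... destroyed" result
    cluster.shutdown()
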
Jan 21 23:47:16 compute-0 sudo[249898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:47:16 compute-0 sudo[249898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:16 compute-0 sudo[249898]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:16 compute-0 sudo[249923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:47:16 compute-0 sudo[249923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:16 compute-0 sudo[249923]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:16.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:16 compute-0 sudo[249948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:47:16 compute-0 sudo[249948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:16 compute-0 sudo[249948]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:16 compute-0 sudo[249973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:47:16 compute-0 sudo[249973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:17 compute-0 podman[250039]: 2026-01-21 23:47:17.077437148 +0000 UTC m=+0.051510616 container create 28de56faebdbe7362070402c99e089b4a55b0650916f7d80a16fafcea1602904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:47:17 compute-0 systemd[1]: Started libpod-conmon-28de56faebdbe7362070402c99e089b4a55b0650916f7d80a16fafcea1602904.scope.
Jan 21 23:47:17 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:47:17 compute-0 podman[250039]: 2026-01-21 23:47:17.142726294 +0000 UTC m=+0.116799802 container init 28de56faebdbe7362070402c99e089b4a55b0650916f7d80a16fafcea1602904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 21 23:47:17 compute-0 podman[250039]: 2026-01-21 23:47:17.049778565 +0000 UTC m=+0.023852053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:47:17 compute-0 podman[250039]: 2026-01-21 23:47:17.149290593 +0000 UTC m=+0.123364061 container start 28de56faebdbe7362070402c99e089b4a55b0650916f7d80a16fafcea1602904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 23:47:17 compute-0 podman[250039]: 2026-01-21 23:47:17.152461614 +0000 UTC m=+0.126535092 container attach 28de56faebdbe7362070402c99e089b4a55b0650916f7d80a16fafcea1602904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:47:17 compute-0 eloquent_mirzakhani[250055]: 167 167
Jan 21 23:47:17 compute-0 systemd[1]: libpod-28de56faebdbe7362070402c99e089b4a55b0650916f7d80a16fafcea1602904.scope: Deactivated successfully.
Jan 21 23:47:17 compute-0 podman[250039]: 2026-01-21 23:47:17.157261647 +0000 UTC m=+0.131335115 container died 28de56faebdbe7362070402c99e089b4a55b0650916f7d80a16fafcea1602904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 23:47:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca371532f2ed144a3b7e1a73da43ccbb961f9e676b428b6ba865fa62aa6c70de-merged.mount: Deactivated successfully.
Jan 21 23:47:17 compute-0 podman[250039]: 2026-01-21 23:47:17.201862001 +0000 UTC m=+0.175935479 container remove 28de56faebdbe7362070402c99e089b4a55b0650916f7d80a16fafcea1602904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 21 23:47:17 compute-0 systemd[1]: libpod-conmon-28de56faebdbe7362070402c99e089b4a55b0650916f7d80a16fafcea1602904.scope: Deactivated successfully.
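
The podman lines from container create through container remove above span roughly 120 ms: cephadm launched a throwaway container from the ceph image, read the single line it printed ("167 167", the uid/gid of the ceph user inside the image), and deleted it. That lifecycle is what a one-shot run produces; approximately, as a sketch (the stat probe mirrors cephadm's uid/gid extraction, but the exact invocation is an assumption):

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')

    # --rm yields exactly the create/init/start/attach/died/remove
    # sequence recorded by the journal above.
    out = subprocess.run(
        ['podman', 'run', '--rm', '--entrypoint', 'stat', IMAGE,
         '-c', '%u %g', '/var/lib/ceph'],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())   # e.g. "167 167"
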
Jan 21 23:47:17 compute-0 podman[250080]: 2026-01-21 23:47:17.367913015 +0000 UTC m=+0.059514222 container create 89e2cdff399265b2715fedb3a97c76046ad4cd0c81d02e91fcf59d8e50b35d03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 21 23:47:17 compute-0 ceph-mon[74318]: pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:17 compute-0 systemd[1]: Started libpod-conmon-89e2cdff399265b2715fedb3a97c76046ad4cd0c81d02e91fcf59d8e50b35d03.scope.
Jan 21 23:47:17 compute-0 podman[250080]: 2026-01-21 23:47:17.338704773 +0000 UTC m=+0.030305990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:47:17 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004ba8cb6c1b5cd0291252b8c0fd09bffbd85f0418001f120ee5f111c754d40b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004ba8cb6c1b5cd0291252b8c0fd09bffbd85f0418001f120ee5f111c754d40b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004ba8cb6c1b5cd0291252b8c0fd09bffbd85f0418001f120ee5f111c754d40b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004ba8cb6c1b5cd0291252b8c0fd09bffbd85f0418001f120ee5f111c754d40b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004ba8cb6c1b5cd0291252b8c0fd09bffbd85f0418001f120ee5f111c754d40b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:47:17 compute-0 podman[250080]: 2026-01-21 23:47:17.450820632 +0000 UTC m=+0.142421819 container init 89e2cdff399265b2715fedb3a97c76046ad4cd0c81d02e91fcf59d8e50b35d03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 21 23:47:17 compute-0 podman[250080]: 2026-01-21 23:47:17.468715514 +0000 UTC m=+0.160316721 container start 89e2cdff399265b2715fedb3a97c76046ad4cd0c81d02e91fcf59d8e50b35d03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:47:17 compute-0 podman[250080]: 2026-01-21 23:47:17.473430674 +0000 UTC m=+0.165031941 container attach 89e2cdff399265b2715fedb3a97c76046ad4cd0c81d02e91fcf59d8e50b35d03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 21 23:47:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:47:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:17.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:18 compute-0 flamboyant_maxwell[250097]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:47:18 compute-0 flamboyant_maxwell[250097]: --> relative data size: 1.0
Jan 21 23:47:18 compute-0 flamboyant_maxwell[250097]: --> All data devices are unavailable
Jan 21 23:47:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:18 compute-0 systemd[1]: libpod-89e2cdff399265b2715fedb3a97c76046ad4cd0c81d02e91fcf59d8e50b35d03.scope: Deactivated successfully.
Jan 21 23:47:18 compute-0 podman[250080]: 2026-01-21 23:47:18.283506776 +0000 UTC m=+0.975107983 container died 89e2cdff399265b2715fedb3a97c76046ad4cd0c81d02e91fcf59d8e50b35d03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_maxwell, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:47:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-004ba8cb6c1b5cd0291252b8c0fd09bffbd85f0418001f120ee5f111c754d40b-merged.mount: Deactivated successfully.
Jan 21 23:47:18 compute-0 podman[250080]: 2026-01-21 23:47:18.370029589 +0000 UTC m=+1.061630776 container remove 89e2cdff399265b2715fedb3a97c76046ad4cd0c81d02e91fcf59d8e50b35d03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_maxwell, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 21 23:47:18 compute-0 systemd[1]: libpod-conmon-89e2cdff399265b2715fedb3a97c76046ad4cd0c81d02e91fcf59d8e50b35d03.scope: Deactivated successfully.
Jan 21 23:47:18 compute-0 sudo[249973]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:18.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:18 compute-0 sudo[250126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:47:18 compute-0 sudo[250126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:18 compute-0 sudo[250126]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:18 compute-0 sudo[250151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:47:18 compute-0 sudo[250151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:18 compute-0 sudo[250151]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:18 compute-0 sudo[250176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:47:18 compute-0 sudo[250176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:18 compute-0 sudo[250176]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:18 compute-0 sudo[250201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:47:18 compute-0 sudo[250201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:19 compute-0 podman[250267]: 2026-01-21 23:47:19.178723536 +0000 UTC m=+0.058017994 container create 0f02bf92120e2c8268f3171d429dd142b0622365d2fa713d2076392039b5c062 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:47:19 compute-0 systemd[1]: Started libpod-conmon-0f02bf92120e2c8268f3171d429dd142b0622365d2fa713d2076392039b5c062.scope.
Jan 21 23:47:19 compute-0 podman[250267]: 2026-01-21 23:47:19.151174916 +0000 UTC m=+0.030469434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:47:19 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:47:19 compute-0 podman[250267]: 2026-01-21 23:47:19.27214489 +0000 UTC m=+0.151439418 container init 0f02bf92120e2c8268f3171d429dd142b0622365d2fa713d2076392039b5c062 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_brattain, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 21 23:47:19 compute-0 podman[250267]: 2026-01-21 23:47:19.284195255 +0000 UTC m=+0.163489713 container start 0f02bf92120e2c8268f3171d429dd142b0622365d2fa713d2076392039b5c062 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_brattain, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:47:19 compute-0 podman[250267]: 2026-01-21 23:47:19.288800472 +0000 UTC m=+0.168094940 container attach 0f02bf92120e2c8268f3171d429dd142b0622365d2fa713d2076392039b5c062 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:47:19 compute-0 jolly_brattain[250285]: 167 167
Jan 21 23:47:19 compute-0 systemd[1]: libpod-0f02bf92120e2c8268f3171d429dd142b0622365d2fa713d2076392039b5c062.scope: Deactivated successfully.
Jan 21 23:47:19 compute-0 podman[250267]: 2026-01-21 23:47:19.292770009 +0000 UTC m=+0.172064467 container died 0f02bf92120e2c8268f3171d429dd142b0622365d2fa713d2076392039b5c062 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:47:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5dbce3f97dce675b95a637075437af555cd26f6599009b2968a693ee3f1f0e9-merged.mount: Deactivated successfully.
Jan 21 23:47:19 compute-0 podman[250267]: 2026-01-21 23:47:19.342937841 +0000 UTC m=+0.222232259 container remove 0f02bf92120e2c8268f3171d429dd142b0622365d2fa713d2076392039b5c062 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 21 23:47:19 compute-0 systemd[1]: libpod-conmon-0f02bf92120e2c8268f3171d429dd142b0622365d2fa713d2076392039b5c062.scope: Deactivated successfully.
Jan 21 23:47:19 compute-0 podman[250282]: 2026-01-21 23:47:19.394546049 +0000 UTC m=+0.163583515 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Jan 21 23:47:19 compute-0 ceph-mon[74318]: pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:19 compute-0 podman[250334]: 2026-01-21 23:47:19.547807483 +0000 UTC m=+0.055770932 container create ae81c7cc6f53020cdbaf014d2e65bcd48a2984ee9a3cf091068a4c4e8529363f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:47:19 compute-0 systemd[1]: Started libpod-conmon-ae81c7cc6f53020cdbaf014d2e65bcd48a2984ee9a3cf091068a4c4e8529363f.scope.
Jan 21 23:47:19 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:47:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec93bdcd23d30612b6f2d48d8df92d9b26c5302ff85fca39944ae8cc80de6e08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:47:19 compute-0 podman[250334]: 2026-01-21 23:47:19.525188642 +0000 UTC m=+0.033152171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:47:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec93bdcd23d30612b6f2d48d8df92d9b26c5302ff85fca39944ae8cc80de6e08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:47:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec93bdcd23d30612b6f2d48d8df92d9b26c5302ff85fca39944ae8cc80de6e08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:47:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec93bdcd23d30612b6f2d48d8df92d9b26c5302ff85fca39944ae8cc80de6e08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:47:19 compute-0 podman[250334]: 2026-01-21 23:47:19.632108515 +0000 UTC m=+0.140072024 container init ae81c7cc6f53020cdbaf014d2e65bcd48a2984ee9a3cf091068a4c4e8529363f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:47:19 compute-0 podman[250334]: 2026-01-21 23:47:19.642109865 +0000 UTC m=+0.150073294 container start ae81c7cc6f53020cdbaf014d2e65bcd48a2984ee9a3cf091068a4c4e8529363f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:47:19 compute-0 podman[250334]: 2026-01-21 23:47:19.645338439 +0000 UTC m=+0.153301928 container attach ae81c7cc6f53020cdbaf014d2e65bcd48a2984ee9a3cf091068a4c4e8529363f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_keller, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 23:47:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:19.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:20 compute-0 dreamy_keller[250350]: {
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:     "1": [
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:         {
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:             "devices": [
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:                 "/dev/loop3"
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:             ],
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:             "lv_name": "ceph_lv0",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:             "lv_size": "7511998464",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:             "name": "ceph_lv0",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:             "tags": {
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:                 "ceph.cluster_name": "ceph",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:                 "ceph.crush_device_class": "",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:                 "ceph.encrypted": "0",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:                 "ceph.osd_id": "1",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:                 "ceph.type": "block",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:                 "ceph.vdo": "0"
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:             },
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:             "type": "block",
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:             "vg_name": "ceph_vg0"
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:         }
Jan 21 23:47:20 compute-0 dreamy_keller[250350]:     ]
Jan 21 23:47:20 compute-0 dreamy_keller[250350]: }
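
The lvm list output above explains the earlier batch result: at 23:47:18 ceph-volume reported "All data devices are unavailable" evidently because the only candidate, /dev/ceph_vg0/ceph_lv0, already carries an OSD; its LV tags bind it to osd_id 1 with osd_fsid 4f45f4f4-edfc-474c-93fc-45d596171ed8, so there is nothing new to create. Reducing that JSON to an osd-to-device map (abbreviated here to the fields used; the full record is in the log above):

    import json

    # Output of: cephadm ... ceph-volume -- lvm list --format json
    lvm_list = json.loads('''{
        "1": [{
            "lv_path": "/dev/ceph_vg0/ceph_lv0",
            "tags": {"ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
                     "ceph.osd_id": "1",
                     "ceph.type": "block"}
        }]
    }''')

    for osd_id, lvs in lvm_list.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['tags']['ceph.type']} on {lv['lv_path']}")
    # -> osd.1: block on /dev/ceph_vg0/ceph_lv0
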
Jan 21 23:47:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:20.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:20 compute-0 systemd[1]: libpod-ae81c7cc6f53020cdbaf014d2e65bcd48a2984ee9a3cf091068a4c4e8529363f.scope: Deactivated successfully.
Jan 21 23:47:20 compute-0 podman[250334]: 2026-01-21 23:47:20.509508587 +0000 UTC m=+1.017472066 container died ae81c7cc6f53020cdbaf014d2e65bcd48a2984ee9a3cf091068a4c4e8529363f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 21 23:47:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec93bdcd23d30612b6f2d48d8df92d9b26c5302ff85fca39944ae8cc80de6e08-merged.mount: Deactivated successfully.
Jan 21 23:47:20 compute-0 podman[250334]: 2026-01-21 23:47:20.576897039 +0000 UTC m=+1.084860488 container remove ae81c7cc6f53020cdbaf014d2e65bcd48a2984ee9a3cf091068a4c4e8529363f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_keller, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:47:20 compute-0 systemd[1]: libpod-conmon-ae81c7cc6f53020cdbaf014d2e65bcd48a2984ee9a3cf091068a4c4e8529363f.scope: Deactivated successfully.
Jan 21 23:47:20 compute-0 sudo[250201]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:20 compute-0 sudo[250371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:47:20 compute-0 sudo[250371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:20 compute-0 sudo[250371]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:20 compute-0 sudo[250396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:47:20 compute-0 sudo[250396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:20 compute-0 sudo[250396]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:20 compute-0 sudo[250421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:47:20 compute-0 sudo[250421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:20 compute-0 sudo[250421]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:20 compute-0 sudo[250446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:47:20 compute-0 sudo[250446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:21 compute-0 podman[250513]: 2026-01-21 23:47:21.344657599 +0000 UTC m=+0.049562344 container create 087a784b9fe24f6005aecd13ffac52b81855c840cf68bbe3e291ada8606b04fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:47:21 compute-0 systemd[1]: Started libpod-conmon-087a784b9fe24f6005aecd13ffac52b81855c840cf68bbe3e291ada8606b04fd.scope.
Jan 21 23:47:21 compute-0 podman[250513]: 2026-01-21 23:47:21.323839104 +0000 UTC m=+0.028743829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:47:21 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:47:21 compute-0 ceph-mon[74318]: pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:21 compute-0 podman[250513]: 2026-01-21 23:47:21.446480601 +0000 UTC m=+0.151385406 container init 087a784b9fe24f6005aecd13ffac52b81855c840cf68bbe3e291ada8606b04fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:47:21 compute-0 podman[250513]: 2026-01-21 23:47:21.458830575 +0000 UTC m=+0.163735310 container start 087a784b9fe24f6005aecd13ffac52b81855c840cf68bbe3e291ada8606b04fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 23:47:21 compute-0 podman[250513]: 2026-01-21 23:47:21.462916225 +0000 UTC m=+0.167820950 container attach 087a784b9fe24f6005aecd13ffac52b81855c840cf68bbe3e291ada8606b04fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 21 23:47:21 compute-0 competent_carver[250530]: 167 167
Jan 21 23:47:21 compute-0 systemd[1]: libpod-087a784b9fe24f6005aecd13ffac52b81855c840cf68bbe3e291ada8606b04fd.scope: Deactivated successfully.
Jan 21 23:47:21 compute-0 podman[250513]: 2026-01-21 23:47:21.467204073 +0000 UTC m=+0.172108818 container died 087a784b9fe24f6005aecd13ffac52b81855c840cf68bbe3e291ada8606b04fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 23:47:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-813a1a703d45178495216d9fa16625a9efc82151e4f80dabdf9e9b2da7c14c72-merged.mount: Deactivated successfully.
Jan 21 23:47:21 compute-0 podman[250513]: 2026-01-21 23:47:21.510242907 +0000 UTC m=+0.215147642 container remove 087a784b9fe24f6005aecd13ffac52b81855c840cf68bbe3e291ada8606b04fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:47:21 compute-0 systemd[1]: libpod-conmon-087a784b9fe24f6005aecd13ffac52b81855c840cf68bbe3e291ada8606b04fd.scope: Deactivated successfully.
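The short-lived competent_carver container above printed only "167 167" before exiting, which is consistent with an orchestrator probe of the ceph uid/gid (167 is the ceph user and group in these images). A minimal re-run of such a probe, with the exact stat invocation being an assumption rather than something the log records:

    import subprocess

    # Hypothetical probe: stat the ceph data dir inside the image.
    # "167 167" is the expected ceph uid/gid in quay.io/ceph/ceph images.
    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         image, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True,
    )
    print(out.stdout.strip())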
Jan 21 23:47:21 compute-0 podman[250554]: 2026-01-21 23:47:21.747888407 +0000 UTC m=+0.064212341 container create bdd364fe2fcc49d4ef2fd0241ce5cb0af1a72613ece7fbae2470610b3a54840d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_murdock, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Jan 21 23:47:21 compute-0 systemd[1]: Started libpod-conmon-bdd364fe2fcc49d4ef2fd0241ce5cb0af1a72613ece7fbae2470610b3a54840d.scope.
Jan 21 23:47:21 compute-0 podman[250554]: 2026-01-21 23:47:21.719862572 +0000 UTC m=+0.036186596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:47:21 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25555c332273ef57cfc41a6413664e57ae306979c5bad5815b5cf27b99a342e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25555c332273ef57cfc41a6413664e57ae306979c5bad5815b5cf27b99a342e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25555c332273ef57cfc41a6413664e57ae306979c5bad5815b5cf27b99a342e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25555c332273ef57cfc41a6413664e57ae306979c5bad5815b5cf27b99a342e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
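The four kernel lines above flag that the container overlay sits on an xfs filesystem whose inode timestamps cap out at 0x7fffffff seconds, i.e. the 32-bit epoch limit (a filesystem formatted without the bigtime feature). A quick check of what that hex limit means in Python:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed epoch second, as logged above.
    limit = 0x7fffffff
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00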
Jan 21 23:47:21 compute-0 podman[250554]: 2026-01-21 23:47:21.839844144 +0000 UTC m=+0.156168148 container init bdd364fe2fcc49d4ef2fd0241ce5cb0af1a72613ece7fbae2470610b3a54840d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:47:21 compute-0 podman[250554]: 2026-01-21 23:47:21.853953214 +0000 UTC m=+0.170277168 container start bdd364fe2fcc49d4ef2fd0241ce5cb0af1a72613ece7fbae2470610b3a54840d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_murdock, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Jan 21 23:47:21 compute-0 podman[250554]: 2026-01-21 23:47:21.858459528 +0000 UTC m=+0.174783542 container attach bdd364fe2fcc49d4ef2fd0241ce5cb0af1a72613ece7fbae2470610b3a54840d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_murdock, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 21 23:47:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:21.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
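The recurring anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102, each answered 200 with near-zero latency every couple of seconds, look like load-balancer health probes against the radosgw beast frontend rather than client traffic. A minimal sketch of an equivalent probe; the host and port here are assumptions, since the log does not record where the gateway listens:

    import http.client

    # Hypothetical endpoint for the RGW beast frontend.
    conn = http.client.HTTPConnection("compute-0", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # a healthy gateway answers 200 with an empty body
    conn.close()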
Jan 21 23:47:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:22.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
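The periodic _set_new_cache_sizes line appears to be the monitor's memory autotuner splitting its budget between incremental osdmaps, full osdmaps, and the RocksDB block cache. Converting the logged byte counts, as a small arithmetic check:

    # The allocations logged above, converted to MiB.
    for name, b in [("cache_size", 1020054731),
                    ("inc_alloc", 348127232),
                    ("full_alloc", 348127232),
                    ("kv_alloc", 318767104)]:
        print(f"{name}: {b / 2**20:.0f} MiB")
    # cache_size ~973 MiB; inc/full 332 MiB each; kv (RocksDB) 304 MiB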
Jan 21 23:47:22 compute-0 nostalgic_murdock[250572]: {
Jan 21 23:47:22 compute-0 nostalgic_murdock[250572]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:47:22 compute-0 nostalgic_murdock[250572]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:47:22 compute-0 nostalgic_murdock[250572]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:47:22 compute-0 nostalgic_murdock[250572]:         "osd_id": 1,
Jan 21 23:47:22 compute-0 nostalgic_murdock[250572]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:47:22 compute-0 nostalgic_murdock[250572]:         "type": "bluestore"
Jan 21 23:47:22 compute-0 nostalgic_murdock[250572]:     }
Jan 21 23:47:22 compute-0 nostalgic_murdock[250572]: }
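The JSON the nostalgic_murdock container just printed matches the shape of ceph-volume raw/lvm inventory output: one entry per OSD UUID carrying the cluster fsid, backing device, OSD id, and store type. A minimal parse of exactly that payload:

    import json

    payload = """{
        "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
            "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 1,
            "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
            "type": "bluestore"
        }
    }"""

    for osd_uuid, osd in json.loads(payload).items():
        print(f"osd.{osd['osd_id']} ({osd['type']}) on {osd['device']}")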
Jan 21 23:47:22 compute-0 systemd[1]: libpod-bdd364fe2fcc49d4ef2fd0241ce5cb0af1a72613ece7fbae2470610b3a54840d.scope: Deactivated successfully.
Jan 21 23:47:22 compute-0 podman[250554]: 2026-01-21 23:47:22.813177558 +0000 UTC m=+1.129501472 container died bdd364fe2fcc49d4ef2fd0241ce5cb0af1a72613ece7fbae2470610b3a54840d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_murdock, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:47:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-25555c332273ef57cfc41a6413664e57ae306979c5bad5815b5cf27b99a342e6-merged.mount: Deactivated successfully.
Jan 21 23:47:22 compute-0 podman[250554]: 2026-01-21 23:47:22.863579158 +0000 UTC m=+1.179903072 container remove bdd364fe2fcc49d4ef2fd0241ce5cb0af1a72613ece7fbae2470610b3a54840d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:47:22 compute-0 systemd[1]: libpod-conmon-bdd364fe2fcc49d4ef2fd0241ce5cb0af1a72613ece7fbae2470610b3a54840d.scope: Deactivated successfully.
Jan 21 23:47:22 compute-0 sudo[250446]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:47:22 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:47:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:47:22 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:47:22 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 0de9b46e-a46a-41a9-8b23-43ccb1e75514 does not exist
Jan 21 23:47:22 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 24e35dd4-69b4-416f-9d1c-b19c9d8e6d2e does not exist
Jan 21 23:47:22 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 5c7c5344-bce6-4074-9b1c-652d910d43c8 does not exist
Jan 21 23:47:22 compute-0 sudo[250604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:47:22 compute-0 sudo[250604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:22 compute-0 sudo[250604]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:23 compute-0 sudo[250629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:47:23 compute-0 sudo[250629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:23 compute-0 sudo[250629]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:23 compute-0 ceph-mon[74318]: pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:47:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:47:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:23.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:24 compute-0 sudo[250655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:47:24 compute-0 sudo[250655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:24 compute-0 sudo[250655]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:24 compute-0 sudo[250680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:47:24 compute-0 sudo[250680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:24 compute-0 sudo[250680]: pam_unix(sudo:session): session closed for user root
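The paired session-open/close entries for ceph-admin running /bin/true under sudo are the usual fingerprint of a remote orchestrator verifying passwordless sudo before doing real work; the log does not say which tool issued them. A minimal sketch of such a connectivity check, with the ssh target being a hypothetical stand-in:

    import subprocess

    # Hypothetical probe: exits 0 only if passwordless sudo works remotely.
    result = subprocess.run(
        ["ssh", "ceph-admin@compute-0", "sudo", "-n", "/bin/true"],
        capture_output=True,
    )
    print("sudo ok" if result.returncode == 0 else "sudo failed")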
Jan 21 23:47:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:24.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:25 compute-0 ceph-mon[74318]: pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:25.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:47:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:26.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:47:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2423347457' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:47:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2423347457' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:47:26 compute-0 ceph-mon[74318]: pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:47:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:47:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:27.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:47:27 compute-0 podman[250707]: 2026-01-21 23:47:27.993701127 +0000 UTC m=+0.096527465 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
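The health_status event above is podman's periodic healthcheck for ovn_metadata_agent: the configured test /openstack/healthcheck ran inside the container and the result is healthy with a failing streak of 0. The same state can be read back from the engine; a minimal sketch, assuming podman is on PATH and the container name from the event:

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "ovn_metadata_agent"],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)[0]["State"]["Health"]
    print(health["Status"], health["FailingStreak"])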
Jan 21 23:47:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:28.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:29 compute-0 ceph-mon[74318]: pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:29.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:30.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:31 compute-0 ceph-mon[74318]: pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:47:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:31.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:47:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:32.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:47:33 compute-0 ceph-mon[74318]: pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:33.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:34.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:35 compute-0 ceph-mon[74318]: pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:35.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:36.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:36 compute-0 ceph-mon[74318]: pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:47:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:37.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:38.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:47:39
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.control', 'images', '.mgr', 'volumes', '.rgw.root']
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
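This balancer pass ran in upmap mode with a 5% misplaced ceiling across the eleven listed pools and prepared 0 of what looks like a cap of 10 optimizations; with every PG already active+clean there is nothing to move. The module's state can be queried the same way; a minimal sketch, assuming an admin keyring is available to the caller:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(out)
    print(status.get("mode"), status.get("active"))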
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:47:39 compute-0 ceph-mon[74318]: pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:47:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:47:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:39.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:40.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:41 compute-0 ceph-mon[74318]: pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:41.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:47:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:42.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:47:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:47:43 compute-0 ceph-mon[74318]: pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:47:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:43.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:47:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:44 compute-0 sudo[250734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:47:44 compute-0 sudo[250734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:44 compute-0 sudo[250734]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:44 compute-0 sudo[250759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:47:44 compute-0 sudo[250759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:47:44 compute-0 sudo[250759]: pam_unix(sudo:session): session closed for user root
Jan 21 23:47:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:44.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:45 compute-0 ceph-mon[74318]: pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:47:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:45.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:47:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:46.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:47 compute-0 ceph-mon[74318]: pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:47:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:47.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:48.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:47:48.743 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:47:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:47:48.744 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:47:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:47:48.744 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
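The three oslo_concurrency lines above are the standard acquire/run/release trace emitted when neutron's ProcessMonitor guards _check_child_processes with a named lock; the lock was held for under a millisecond. The pattern that produces this trace, as a minimal sketch:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Body runs with the named lock held; oslo logs the
        # "Acquiring" / "acquired" / "released" lines seen above.
        pass

    check_child_processes()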
Jan 21 23:47:49 compute-0 ceph-mon[74318]: pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:49.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:50 compute-0 podman[250787]: 2026-01-21 23:47:50.048914286 +0000 UTC m=+0.149557177 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 21 23:47:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:50.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:51 compute-0 ceph-mon[74318]: pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:47:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:51.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:47:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:52.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:47:53 compute-0 ceph-mon[74318]: pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:53.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
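The autoscaler figures above are internally consistent with pg_target = capacity_ratio x bias x overall PG budget, where the budget works out to 300 from the logged values (e.g. for '.mgr': 2.0538e-05 x 1.0 x 300 = 0.00616, exactly the logged target), matching a 3-OSD cluster at the default 100 PGs per OSD; targets are then quantized to a power of two. A worked check, treating the 300-PG budget as an inference rather than a logged fact:

    # Reproduce two of the logged pg targets from the logged inputs.
    budget = 300  # inferred: 3 OSDs * mon_target_pg_per_osd (default 100)

    for pool, ratio, bias, logged in [
        (".mgr", 2.0538165363856318e-05, 1.0, 0.006161449609156895),
        ("cephfs.cephfs.meta", 1.4540294062907128e-06, 4.0,
         0.0017448352875488555),
    ]:
        target = ratio * bias * budget
        print(pool, target, abs(target - logged) < 1e-12)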
Jan 21 23:47:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:47:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:54.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:47:55 compute-0 ceph-mon[74318]: pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:55.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:56 compute-0 ceph-mon[74318]: pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:56.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:47:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:57.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:47:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:47:58.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:47:58 compute-0 podman[250817]: 2026-01-21 23:47:58.938592931 +0000 UTC m=+0.057636301 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 21 23:47:59 compute-0 ceph-mon[74318]: pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:47:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:47:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:47:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:47:59.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:48:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:00.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:01 compute-0 ceph-mon[74318]: pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:01.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:02.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:48:03 compute-0 ceph-mon[74318]: pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:03.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:04 compute-0 sudo[250839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:48:04 compute-0 sudo[250839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:04 compute-0 sudo[250839]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:48:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:04.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:48:04 compute-0 sudo[250864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:48:04 compute-0 sudo[250864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:04 compute-0 sudo[250864]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:05 compute-0 ceph-mon[74318]: pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:05.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:06 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/572174131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:48:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:48:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:06.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:48:07 compute-0 nova_compute[247516]: 2026-01-21 23:48:07.412 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:48:07 compute-0 nova_compute[247516]: 2026-01-21 23:48:07.413 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:48:07 compute-0 nova_compute[247516]: 2026-01-21 23:48:07.414 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 23:48:07 compute-0 nova_compute[247516]: 2026-01-21 23:48:07.414 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 23:48:07 compute-0 ceph-mon[74318]: pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:07 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3130867017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:48:07 compute-0 nova_compute[247516]: 2026-01-21 23:48:07.466 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 23:48:07 compute-0 nova_compute[247516]: 2026-01-21 23:48:07.467 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:48:07 compute-0 nova_compute[247516]: 2026-01-21 23:48:07.467 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:48:07 compute-0 nova_compute[247516]: 2026-01-21 23:48:07.467 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:48:07 compute-0 nova_compute[247516]: 2026-01-21 23:48:07.468 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:48:07 compute-0 nova_compute[247516]: 2026-01-21 23:48:07.468 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:48:07 compute-0 nova_compute[247516]: 2026-01-21 23:48:07.468 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 23:48:07 compute-0 nova_compute[247516]: 2026-01-21 23:48:07.468 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:48:07 compute-0 nova_compute[247516]: 2026-01-21 23:48:07.591 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:48:07 compute-0 nova_compute[247516]: 2026-01-21 23:48:07.592 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:48:07 compute-0 nova_compute[247516]: 2026-01-21 23:48:07.592 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:48:07 compute-0 nova_compute[247516]: 2026-01-21 23:48:07.593 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 23:48:07 compute-0 nova_compute[247516]: 2026-01-21 23:48:07.594 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:48:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:48:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:48:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:07.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:48:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:48:08 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1010637715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:48:08 compute-0 nova_compute[247516]: 2026-01-21 23:48:08.116 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:48:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:08 compute-0 nova_compute[247516]: 2026-01-21 23:48:08.383 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 23:48:08 compute-0 nova_compute[247516]: 2026-01-21 23:48:08.384 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5242MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 23:48:08 compute-0 nova_compute[247516]: 2026-01-21 23:48:08.385 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:48:08 compute-0 nova_compute[247516]: 2026-01-21 23:48:08.385 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:48:08 compute-0 nova_compute[247516]: 2026-01-21 23:48:08.556 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 23:48:08 compute-0 nova_compute[247516]: 2026-01-21 23:48:08.557 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 23:48:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:08.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:08 compute-0 nova_compute[247516]: 2026-01-21 23:48:08.578 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:48:08 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1010637715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:48:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:48:08 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2153129499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:48:09 compute-0 nova_compute[247516]: 2026-01-21 23:48:09.011 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:48:09 compute-0 nova_compute[247516]: 2026-01-21 23:48:09.020 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 23:48:09 compute-0 nova_compute[247516]: 2026-01-21 23:48:09.151 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 23:48:09 compute-0 nova_compute[247516]: 2026-01-21 23:48:09.154 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 23:48:09 compute-0 nova_compute[247516]: 2026-01-21 23:48:09.155 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:48:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:48:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:48:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:48:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:48:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:48:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:48:09 compute-0 nova_compute[247516]: 2026-01-21 23:48:09.680 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:48:09 compute-0 ceph-mon[74318]: pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:09 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1572390434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:48:09 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2153129499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:48:09 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2191170582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:48:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 21 23:48:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:09.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 21 23:48:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:10.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:10 compute-0 ceph-mon[74318]: pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:48:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:11.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:48:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:12.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:48:13 compute-0 ceph-mon[74318]: pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:13.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:14.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:15 compute-0 ceph-mon[74318]: pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:15.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:16.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:17 compute-0 ceph-mon[74318]: pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:48:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:48:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:17.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:48:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:18.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:19 compute-0 ceph-mon[74318]: pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:48:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:19.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:48:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:20.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:21 compute-0 podman[250941]: 2026-01-21 23:48:21.032165185 +0000 UTC m=+0.138107291 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 21 23:48:21 compute-0 ceph-mon[74318]: pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:21.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:22.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:48:23 compute-0 sudo[250970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:48:23 compute-0 sudo[250970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:23 compute-0 sudo[250970]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:23 compute-0 ceph-mon[74318]: pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:23 compute-0 sudo[250996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:48:23 compute-0 sudo[250996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:23 compute-0 sudo[250996]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:23 compute-0 sudo[251021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:48:23 compute-0 sudo[251021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:23 compute-0 sudo[251021]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:23 compute-0 sudo[251046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:48:23 compute-0 sudo[251046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:23.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:24 compute-0 sudo[251046]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:48:24 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:48:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:48:24 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:48:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:48:24 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:48:24 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 04a741b6-deed-41b3-89ad-85066dd24747 does not exist
Jan 21 23:48:24 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev d85d247a-5412-46ca-ae24-fbaeae0168eb does not exist
Jan 21 23:48:24 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev d4113a3f-17fe-4c16-8959-36adda3fd4c2 does not exist
Jan 21 23:48:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:48:24 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:48:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:48:24 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:48:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:48:24 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:48:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:24 compute-0 sudo[251102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:48:24 compute-0 sudo[251102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:24 compute-0 sudo[251102]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:24 compute-0 sudo[251127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:48:24 compute-0 sudo[251127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:48:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:48:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:48:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:48:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:48:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:48:24 compute-0 sudo[251127]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:24 compute-0 sudo[251152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:48:24 compute-0 sudo[251152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:24 compute-0 sudo[251152]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:24.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:24 compute-0 sudo[251177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:48:24 compute-0 sudo[251177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:24 compute-0 sudo[251201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:48:24 compute-0 sudo[251201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:24 compute-0 sudo[251201]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:24 compute-0 sudo[251227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:48:24 compute-0 sudo[251227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:24 compute-0 sudo[251227]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:25 compute-0 podman[251294]: 2026-01-21 23:48:25.091209899 +0000 UTC m=+0.061099193 container create 60c056c019356f501e28c2bc02b6ba7e8d4943a753c155ed5c740fcb2abc0ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:48:25 compute-0 systemd[1]: Started libpod-conmon-60c056c019356f501e28c2bc02b6ba7e8d4943a753c155ed5c740fcb2abc0ca6.scope.
Jan 21 23:48:25 compute-0 podman[251294]: 2026-01-21 23:48:25.06025857 +0000 UTC m=+0.030147904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:48:25 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:48:25 compute-0 podman[251294]: 2026-01-21 23:48:25.193369101 +0000 UTC m=+0.163258385 container init 60c056c019356f501e28c2bc02b6ba7e8d4943a753c155ed5c740fcb2abc0ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:48:25 compute-0 podman[251294]: 2026-01-21 23:48:25.20270699 +0000 UTC m=+0.172596254 container start 60c056c019356f501e28c2bc02b6ba7e8d4943a753c155ed5c740fcb2abc0ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:48:25 compute-0 podman[251294]: 2026-01-21 23:48:25.207089459 +0000 UTC m=+0.176978763 container attach 60c056c019356f501e28c2bc02b6ba7e8d4943a753c155ed5c740fcb2abc0ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_rubin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 21 23:48:25 compute-0 agitated_rubin[251310]: 167 167
Jan 21 23:48:25 compute-0 systemd[1]: libpod-60c056c019356f501e28c2bc02b6ba7e8d4943a753c155ed5c740fcb2abc0ca6.scope: Deactivated successfully.
Jan 21 23:48:25 compute-0 conmon[251310]: conmon 60c056c019356f501e28 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-60c056c019356f501e28c2bc02b6ba7e8d4943a753c155ed5c740fcb2abc0ca6.scope/container/memory.events
Jan 21 23:48:25 compute-0 podman[251315]: 2026-01-21 23:48:25.252797089 +0000 UTC m=+0.026306871 container died 60c056c019356f501e28c2bc02b6ba7e8d4943a753c155ed5c740fcb2abc0ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:48:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3b79cc836fc57d928cdc303299d28a8f3875d5307e5e75e0d77336b5db0052e-merged.mount: Deactivated successfully.
Jan 21 23:48:25 compute-0 podman[251315]: 2026-01-21 23:48:25.293518359 +0000 UTC m=+0.067028111 container remove 60c056c019356f501e28c2bc02b6ba7e8d4943a753c155ed5c740fcb2abc0ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:48:25 compute-0 systemd[1]: libpod-conmon-60c056c019356f501e28c2bc02b6ba7e8d4943a753c155ed5c740fcb2abc0ca6.scope: Deactivated successfully.
Jan 21 23:48:25 compute-0 ceph-mon[74318]: pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:25 compute-0 podman[251339]: 2026-01-21 23:48:25.510387146 +0000 UTC m=+0.046165826 container create a94cecab5fb17400f6e8a1c7a4322d86f2e32224b2a7a63e2398083e8e9872d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ritchie, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 23:48:25 compute-0 systemd[1]: Started libpod-conmon-a94cecab5fb17400f6e8a1c7a4322d86f2e32224b2a7a63e2398083e8e9872d8.scope.
Jan 21 23:48:25 compute-0 podman[251339]: 2026-01-21 23:48:25.493503597 +0000 UTC m=+0.029282307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:48:25 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/623e1d2dce903eb07746b2b029befee55a05c310c904ba79ef28c4ba3175e7b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/623e1d2dce903eb07746b2b029befee55a05c310c904ba79ef28c4ba3175e7b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/623e1d2dce903eb07746b2b029befee55a05c310c904ba79ef28c4ba3175e7b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/623e1d2dce903eb07746b2b029befee55a05c310c904ba79ef28c4ba3175e7b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/623e1d2dce903eb07746b2b029befee55a05c310c904ba79ef28c4ba3175e7b5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:48:25 compute-0 podman[251339]: 2026-01-21 23:48:25.611136183 +0000 UTC m=+0.146914943 container init a94cecab5fb17400f6e8a1c7a4322d86f2e32224b2a7a63e2398083e8e9872d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ritchie, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:48:25 compute-0 podman[251339]: 2026-01-21 23:48:25.621309178 +0000 UTC m=+0.157087858 container start a94cecab5fb17400f6e8a1c7a4322d86f2e32224b2a7a63e2398083e8e9872d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 21 23:48:25 compute-0 podman[251339]: 2026-01-21 23:48:25.625882355 +0000 UTC m=+0.161661075 container attach a94cecab5fb17400f6e8a1c7a4322d86f2e32224b2a7a63e2398083e8e9872d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:48:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:25.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3574427329' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:48:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3574427329' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:48:26 compute-0 laughing_ritchie[251356]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:48:26 compute-0 laughing_ritchie[251356]: --> relative data size: 1.0
Jan 21 23:48:26 compute-0 laughing_ritchie[251356]: --> All data devices are unavailable
Jan 21 23:48:26 compute-0 systemd[1]: libpod-a94cecab5fb17400f6e8a1c7a4322d86f2e32224b2a7a63e2398083e8e9872d8.scope: Deactivated successfully.
Jan 21 23:48:26 compute-0 podman[251339]: 2026-01-21 23:48:26.525615008 +0000 UTC m=+1.061393758 container died a94cecab5fb17400f6e8a1c7a4322d86f2e32224b2a7a63e2398083e8e9872d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 21 23:48:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:26.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-623e1d2dce903eb07746b2b029befee55a05c310c904ba79ef28c4ba3175e7b5-merged.mount: Deactivated successfully.
Jan 21 23:48:26 compute-0 podman[251339]: 2026-01-21 23:48:26.72197855 +0000 UTC m=+1.257757280 container remove a94cecab5fb17400f6e8a1c7a4322d86f2e32224b2a7a63e2398083e8e9872d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ritchie, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:48:26 compute-0 systemd[1]: libpod-conmon-a94cecab5fb17400f6e8a1c7a4322d86f2e32224b2a7a63e2398083e8e9872d8.scope: Deactivated successfully.
Jan 21 23:48:26 compute-0 sudo[251177]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:26 compute-0 sudo[251384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:48:26 compute-0 sudo[251384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:26 compute-0 sudo[251384]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:26 compute-0 sudo[251409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:48:26 compute-0 sudo[251409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:26 compute-0 sudo[251409]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:27 compute-0 sudo[251434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:48:27 compute-0 sudo[251434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:27 compute-0 sudo[251434]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:27 compute-0 sudo[251459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:48:27 compute-0 sudo[251459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:27 compute-0 ceph-mon[74318]: pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:27 compute-0 podman[251527]: 2026-01-21 23:48:27.534723026 +0000 UTC m=+0.062766746 container create 00e9ddaad937a14b22fbf3f26ca4a64c0be394871d679053c4f6e621aceded73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:48:27 compute-0 systemd[1]: Started libpod-conmon-00e9ddaad937a14b22fbf3f26ca4a64c0be394871d679053c4f6e621aceded73.scope.
Jan 21 23:48:27 compute-0 podman[251527]: 2026-01-21 23:48:27.504179691 +0000 UTC m=+0.032223461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:48:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:48:27 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:48:27 compute-0 podman[251527]: 2026-01-21 23:48:27.64508004 +0000 UTC m=+0.173123800 container init 00e9ddaad937a14b22fbf3f26ca4a64c0be394871d679053c4f6e621aceded73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:48:27 compute-0 podman[251527]: 2026-01-21 23:48:27.657409824 +0000 UTC m=+0.185453544 container start 00e9ddaad937a14b22fbf3f26ca4a64c0be394871d679053c4f6e621aceded73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 21 23:48:27 compute-0 podman[251527]: 2026-01-21 23:48:27.662108004 +0000 UTC m=+0.190151764 container attach 00e9ddaad937a14b22fbf3f26ca4a64c0be394871d679053c4f6e621aceded73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chaplygin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 21 23:48:27 compute-0 thirsty_chaplygin[251543]: 167 167
Jan 21 23:48:27 compute-0 systemd[1]: libpod-00e9ddaad937a14b22fbf3f26ca4a64c0be394871d679053c4f6e621aceded73.scope: Deactivated successfully.
Jan 21 23:48:27 compute-0 podman[251527]: 2026-01-21 23:48:27.66762631 +0000 UTC m=+0.195669990 container died 00e9ddaad937a14b22fbf3f26ca4a64c0be394871d679053c4f6e621aceded73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:48:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-44819b286c2f54da8a3445d1152100d426e255b61fc90efe60d98bd89ff68d02-merged.mount: Deactivated successfully.
Jan 21 23:48:27 compute-0 podman[251527]: 2026-01-21 23:48:27.711406019 +0000 UTC m=+0.239449709 container remove 00e9ddaad937a14b22fbf3f26ca4a64c0be394871d679053c4f6e621aceded73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chaplygin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 21 23:48:27 compute-0 systemd[1]: libpod-conmon-00e9ddaad937a14b22fbf3f26ca4a64c0be394871d679053c4f6e621aceded73.scope: Deactivated successfully.
Jan 21 23:48:27 compute-0 podman[251568]: 2026-01-21 23:48:27.949169922 +0000 UTC m=+0.047824618 container create 12979b4e79f010d60138e0a3ced2d6f4e14c2948f1cb166b9e2c66f7b1f35d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 21 23:48:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:27.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:27 compute-0 systemd[1]: Started libpod-conmon-12979b4e79f010d60138e0a3ced2d6f4e14c2948f1cb166b9e2c66f7b1f35d3a.scope.
Jan 21 23:48:28 compute-0 podman[251568]: 2026-01-21 23:48:27.925250338 +0000 UTC m=+0.023904994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:48:28 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a1093918562e5f9c86c5ddd8c568dbbb3e4f5cde46191cf31f09620203afb48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a1093918562e5f9c86c5ddd8c568dbbb3e4f5cde46191cf31f09620203afb48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a1093918562e5f9c86c5ddd8c568dbbb3e4f5cde46191cf31f09620203afb48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a1093918562e5f9c86c5ddd8c568dbbb3e4f5cde46191cf31f09620203afb48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:48:28 compute-0 podman[251568]: 2026-01-21 23:48:28.185946134 +0000 UTC m=+0.284600870 container init 12979b4e79f010d60138e0a3ced2d6f4e14c2948f1cb166b9e2c66f7b1f35d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 23:48:28 compute-0 podman[251568]: 2026-01-21 23:48:28.198369091 +0000 UTC m=+0.297023747 container start 12979b4e79f010d60138e0a3ced2d6f4e14c2948f1cb166b9e2c66f7b1f35d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 21 23:48:28 compute-0 podman[251568]: 2026-01-21 23:48:28.201762699 +0000 UTC m=+0.300417395 container attach 12979b4e79f010d60138e0a3ced2d6f4e14c2948f1cb166b9e2c66f7b1f35d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 23:48:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:28 compute-0 ceph-mon[74318]: pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:28.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]: {
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:     "1": [
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:         {
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:             "devices": [
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:                 "/dev/loop3"
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:             ],
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:             "lv_name": "ceph_lv0",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:             "lv_size": "7511998464",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:             "name": "ceph_lv0",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:             "tags": {
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:                 "ceph.cluster_name": "ceph",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:                 "ceph.crush_device_class": "",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:                 "ceph.encrypted": "0",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:                 "ceph.osd_id": "1",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:                 "ceph.type": "block",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:                 "ceph.vdo": "0"
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:             },
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:             "type": "block",
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:             "vg_name": "ceph_vg0"
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:         }
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]:     ]
Jan 21 23:48:29 compute-0 dazzling_hypatia[251584]: }
Jan 21 23:48:29 compute-0 systemd[1]: libpod-12979b4e79f010d60138e0a3ced2d6f4e14c2948f1cb166b9e2c66f7b1f35d3a.scope: Deactivated successfully.
Jan 21 23:48:29 compute-0 podman[251568]: 2026-01-21 23:48:29.049034138 +0000 UTC m=+1.147688804 container died 12979b4e79f010d60138e0a3ced2d6f4e14c2948f1cb166b9e2c66f7b1f35d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 21 23:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a1093918562e5f9c86c5ddd8c568dbbb3e4f5cde46191cf31f09620203afb48-merged.mount: Deactivated successfully.
Jan 21 23:48:29 compute-0 podman[251568]: 2026-01-21 23:48:29.102119814 +0000 UTC m=+1.200774470 container remove 12979b4e79f010d60138e0a3ced2d6f4e14c2948f1cb166b9e2c66f7b1f35d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 21 23:48:29 compute-0 systemd[1]: libpod-conmon-12979b4e79f010d60138e0a3ced2d6f4e14c2948f1cb166b9e2c66f7b1f35d3a.scope: Deactivated successfully.
Jan 21 23:48:29 compute-0 sudo[251459]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:29 compute-0 podman[251594]: 2026-01-21 23:48:29.149077183 +0000 UTC m=+0.061797095 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 21 23:48:29 compute-0 sudo[251624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:48:29 compute-0 sudo[251624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:29 compute-0 sudo[251624]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:29 compute-0 sudo[251650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:48:29 compute-0 sudo[251650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:29 compute-0 sudo[251650]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=404 latency=0.004000129s ======
Jan 21 23:48:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:29.291 +0000] "GET /info HTTP/1.1" 404 150 - "python-urllib3/1.26.5" - latency=0.004000129s
Jan 21 23:48:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:48:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - - [21/Jan/2026:23:48:29.311 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.001000032s
Jan 21 23:48:29 compute-0 sudo[251675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:48:29 compute-0 sudo[251675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:29 compute-0 sudo[251675]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:29 compute-0 sudo[251701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:48:29 compute-0 sudo[251701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:29 compute-0 podman[251766]: 2026-01-21 23:48:29.885376808 +0000 UTC m=+0.062410794 container create e2f9c1b83b860be2a069ec83b39abedf8a00968905723cb4644315e797f0b19f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_northcutt, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:48:29 compute-0 systemd[1]: Started libpod-conmon-e2f9c1b83b860be2a069ec83b39abedf8a00968905723cb4644315e797f0b19f.scope.
Jan 21 23:48:29 compute-0 podman[251766]: 2026-01-21 23:48:29.861343121 +0000 UTC m=+0.038377127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:48:29 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:48:29 compute-0 podman[251766]: 2026-01-21 23:48:29.979664799 +0000 UTC m=+0.156698805 container init e2f9c1b83b860be2a069ec83b39abedf8a00968905723cb4644315e797f0b19f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_northcutt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 21 23:48:29 compute-0 podman[251766]: 2026-01-21 23:48:29.98718068 +0000 UTC m=+0.164214676 container start e2f9c1b83b860be2a069ec83b39abedf8a00968905723cb4644315e797f0b19f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 21 23:48:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:48:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:29.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:48:29 compute-0 podman[251766]: 2026-01-21 23:48:29.992243631 +0000 UTC m=+0.169277677 container attach e2f9c1b83b860be2a069ec83b39abedf8a00968905723cb4644315e797f0b19f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_northcutt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 21 23:48:29 compute-0 blissful_northcutt[251783]: 167 167
Jan 21 23:48:29 compute-0 systemd[1]: libpod-e2f9c1b83b860be2a069ec83b39abedf8a00968905723cb4644315e797f0b19f.scope: Deactivated successfully.
Jan 21 23:48:29 compute-0 podman[251766]: 2026-01-21 23:48:29.996231568 +0000 UTC m=+0.173265554 container died e2f9c1b83b860be2a069ec83b39abedf8a00968905723cb4644315e797f0b19f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:48:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-e07faa08e7253a11b65de14a678e12c7ee6b8f7d5790eedf0984b01e6a640ab4-merged.mount: Deactivated successfully.
Jan 21 23:48:30 compute-0 podman[251766]: 2026-01-21 23:48:30.044178479 +0000 UTC m=+0.221212465 container remove e2f9c1b83b860be2a069ec83b39abedf8a00968905723cb4644315e797f0b19f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 21 23:48:30 compute-0 systemd[1]: libpod-conmon-e2f9c1b83b860be2a069ec83b39abedf8a00968905723cb4644315e797f0b19f.scope: Deactivated successfully.
Jan 21 23:48:30 compute-0 podman[251808]: 2026-01-21 23:48:30.2123528 +0000 UTC m=+0.041132074 container create 0097128ed31a46d51544a58051e50f42afad610ebe4f57a648f253463ebeebd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cartwright, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 21 23:48:30 compute-0 systemd[1]: Started libpod-conmon-0097128ed31a46d51544a58051e50f42afad610ebe4f57a648f253463ebeebd5.scope.
Jan 21 23:48:30 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9599c46287462c17514b995e54723a9e3611bbdad2f485c69a20407f2c01927f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9599c46287462c17514b995e54723a9e3611bbdad2f485c69a20407f2c01927f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:48:30 compute-0 podman[251808]: 2026-01-21 23:48:30.196218145 +0000 UTC m=+0.024997429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9599c46287462c17514b995e54723a9e3611bbdad2f485c69a20407f2c01927f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9599c46287462c17514b995e54723a9e3611bbdad2f485c69a20407f2c01927f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:48:30 compute-0 podman[251808]: 2026-01-21 23:48:30.304468602 +0000 UTC m=+0.133247946 container init 0097128ed31a46d51544a58051e50f42afad610ebe4f57a648f253463ebeebd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 23:48:30 compute-0 podman[251808]: 2026-01-21 23:48:30.316810887 +0000 UTC m=+0.145590191 container start 0097128ed31a46d51544a58051e50f42afad610ebe4f57a648f253463ebeebd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cartwright, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:48:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:30 compute-0 podman[251808]: 2026-01-21 23:48:30.321175286 +0000 UTC m=+0.149954590 container attach 0097128ed31a46d51544a58051e50f42afad610ebe4f57a648f253463ebeebd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cartwright, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Jan 21 23:48:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:30.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:31 compute-0 angry_cartwright[251826]: {
Jan 21 23:48:31 compute-0 angry_cartwright[251826]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:48:31 compute-0 angry_cartwright[251826]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:48:31 compute-0 angry_cartwright[251826]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:48:31 compute-0 angry_cartwright[251826]:         "osd_id": 1,
Jan 21 23:48:31 compute-0 angry_cartwright[251826]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:48:31 compute-0 angry_cartwright[251826]:         "type": "bluestore"
Jan 21 23:48:31 compute-0 angry_cartwright[251826]:     }
Jan 21 23:48:31 compute-0 angry_cartwright[251826]: }
Jan 21 23:48:31 compute-0 systemd[1]: libpod-0097128ed31a46d51544a58051e50f42afad610ebe4f57a648f253463ebeebd5.scope: Deactivated successfully.
Jan 21 23:48:31 compute-0 podman[251808]: 2026-01-21 23:48:31.222510191 +0000 UTC m=+1.051289495 container died 0097128ed31a46d51544a58051e50f42afad610ebe4f57a648f253463ebeebd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cartwright, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:48:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9599c46287462c17514b995e54723a9e3611bbdad2f485c69a20407f2c01927f-merged.mount: Deactivated successfully.
Jan 21 23:48:31 compute-0 podman[251808]: 2026-01-21 23:48:31.301863536 +0000 UTC m=+1.130642840 container remove 0097128ed31a46d51544a58051e50f42afad610ebe4f57a648f253463ebeebd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cartwright, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 23:48:31 compute-0 systemd[1]: libpod-conmon-0097128ed31a46d51544a58051e50f42afad610ebe4f57a648f253463ebeebd5.scope: Deactivated successfully.
Jan 21 23:48:31 compute-0 sudo[251701]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:48:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:48:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:48:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:48:31 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev d7f91186-93e9-43e3-b83e-742ad6da0d89 does not exist
Jan 21 23:48:31 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 8acfa6bc-e36d-4eba-82df-f6421600d7c2 does not exist
Jan 21 23:48:31 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 338a6048-a523-4f94-a513-ddaf9add3ff5 does not exist
Jan 21 23:48:31 compute-0 ceph-mon[74318]: pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:48:31 compute-0 sudo[251861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:48:31 compute-0 sudo[251861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:31 compute-0 sudo[251861]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:31 compute-0 sudo[251886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:48:31 compute-0 sudo[251886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:31 compute-0 sudo[251886]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:31.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:32 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:48:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:32.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:48:33 compute-0 ceph-mon[74318]: pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:33.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:34 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 21 23:48:34 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 21 23:48:34 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 21 23:48:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:34.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:35 compute-0 ceph-mon[74318]: pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:35 compute-0 ceph-mon[74318]: osdmap e141: 3 total, 3 up, 3 in
Jan 21 23:48:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 21 23:48:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 21 23:48:35 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 21 23:48:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:35.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 21 23:48:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 21 23:48:36 compute-0 ceph-mon[74318]: osdmap e142: 3 total, 3 up, 3 in
Jan 21 23:48:36 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 21 23:48:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:36.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:37 compute-0 ceph-mon[74318]: pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:37 compute-0 ceph-mon[74318]: osdmap e143: 3 total, 3 up, 3 in
Jan 21 23:48:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:48:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:37.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Jan 21 23:48:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:38.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:48:39
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['images', 'vms', 'backups', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'default.rgw.meta']
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:48:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:48:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 21 23:48:39 compute-0 ceph-mon[74318]: pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Jan 21 23:48:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 21 23:48:39 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 21 23:48:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:39.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 43 KiB/s rd, 7.0 MiB/s wr, 63 op/s
Jan 21 23:48:40 compute-0 ceph-mon[74318]: osdmap e144: 3 total, 3 up, 3 in
Jan 21 23:48:40 compute-0 ceph-mon[74318]: pgmap v887: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 43 KiB/s rd, 7.0 MiB/s wr, 63 op/s
Jan 21 23:48:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:40.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:48:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:42.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:48:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 6.0 MiB/s wr, 55 op/s
Jan 21 23:48:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:48:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 21 23:48:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:42.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 21 23:48:42 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 21 23:48:43 compute-0 ceph-mon[74318]: pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 6.0 MiB/s wr, 55 op/s
Jan 21 23:48:43 compute-0 ceph-mon[74318]: osdmap e145: 3 total, 3 up, 3 in
Jan 21 23:48:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:44.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 5.2 MiB/s wr, 48 op/s
Jan 21 23:48:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:44.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:44 compute-0 sudo[251917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:48:44 compute-0 sudo[251917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:44 compute-0 sudo[251917]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:44 compute-0 sudo[251942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:48:44 compute-0 sudo[251942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:48:44 compute-0 sudo[251942]: pam_unix(sudo:session): session closed for user root
Jan 21 23:48:45 compute-0 ceph-mon[74318]: pgmap v890: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 5.2 MiB/s wr, 48 op/s
Jan 21 23:48:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:46.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 46 op/s
Jan 21 23:48:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:46.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:47 compute-0 ceph-mon[74318]: pgmap v891: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 46 op/s
Jan 21 23:48:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:48:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:48.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 4.6 MiB/s wr, 42 op/s
Jan 21 23:48:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:48:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:48.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:48:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:48:48.743 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:48:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:48:48.745 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:48:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:48:48.746 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:48:49 compute-0 ceph-mon[74318]: pgmap v892: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 4.6 MiB/s wr, 42 op/s
Jan 21 23:48:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:50.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 818 B/s rd, 102 B/s wr, 0 op/s
Jan 21 23:48:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:50.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:51 compute-0 ceph-mon[74318]: pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 818 B/s rd, 102 B/s wr, 0 op/s
Jan 21 23:48:51 compute-0 podman[251971]: 2026-01-21 23:48:51.989644433 +0000 UTC m=+0.102377301 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 21 23:48:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:52.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Jan 21 23:48:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:48:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:52.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:53 compute-0 ceph-mon[74318]: pgmap v894: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Jan 21 23:48:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:54.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:54 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:48:54.111 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 23:48:54 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:48:54.112 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:48:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 87 B/s rd, 0 B/s wr, 0 op/s
Jan 21 23:48:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:48:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:54.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:48:55 compute-0 ceph-mon[74318]: pgmap v895: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 87 B/s rd, 0 B/s wr, 0 op/s
Jan 21 23:48:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:56.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 21 23:48:56 compute-0 ceph-mon[74318]: pgmap v896: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 21 23:48:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:56.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:48:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:48:58.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:48:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:48:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:48:58.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:48:59 compute-0 ceph-mon[74318]: pgmap v897: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:48:59 compute-0 podman[252001]: 2026-01-21 23:48:59.954885627 +0000 UTC m=+0.069939646 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:49:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:00.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:00.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:01 compute-0 ceph-mon[74318]: pgmap v898: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:02.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:49:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:49:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:02.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:49:03 compute-0 ceph-mon[74318]: pgmap v899: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:04.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:04 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:49:04.115 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 23:49:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:04.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:04 compute-0 nova_compute[247516]: 2026-01-21 23:49:04.987 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:49:05 compute-0 sudo[252024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:49:05 compute-0 sudo[252024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:05 compute-0 sudo[252024]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:05 compute-0 sudo[252049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:49:05 compute-0 sudo[252049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:05 compute-0 sudo[252049]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:05 compute-0 ceph-mon[74318]: pgmap v900: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:05 compute-0 nova_compute[247516]: 2026-01-21 23:49:05.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:49:05 compute-0 nova_compute[247516]: 2026-01-21 23:49:05.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 23:49:05 compute-0 nova_compute[247516]: 2026-01-21 23:49:05.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 23:49:06 compute-0 nova_compute[247516]: 2026-01-21 23:49:06.009 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 23:49:06 compute-0 nova_compute[247516]: 2026-01-21 23:49:06.009 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:49:06 compute-0 nova_compute[247516]: 2026-01-21 23:49:06.009 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 23:49:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:06.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:06 compute-0 ceph-mon[74318]: pgmap v901: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:06.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:06 compute-0 nova_compute[247516]: 2026-01-21 23:49:06.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:49:06 compute-0 nova_compute[247516]: 2026-01-21 23:49:06.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:49:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:49:07 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1542896888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:49:07 compute-0 nova_compute[247516]: 2026-01-21 23:49:07.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:49:07 compute-0 nova_compute[247516]: 2026-01-21 23:49:07.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:49:07 compute-0 nova_compute[247516]: 2026-01-21 23:49:07.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:49:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:08.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:08 compute-0 nova_compute[247516]: 2026-01-21 23:49:08.033 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:49:08 compute-0 nova_compute[247516]: 2026-01-21 23:49:08.033 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:49:08 compute-0 nova_compute[247516]: 2026-01-21 23:49:08.034 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:49:08 compute-0 nova_compute[247516]: 2026-01-21 23:49:08.034 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 23:49:08 compute-0 nova_compute[247516]: 2026-01-21 23:49:08.035 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:49:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:49:08 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3224925274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:49:08 compute-0 nova_compute[247516]: 2026-01-21 23:49:08.497 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:49:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:49:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:08.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:49:08 compute-0 nova_compute[247516]: 2026-01-21 23:49:08.700 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 23:49:08 compute-0 nova_compute[247516]: 2026-01-21 23:49:08.703 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5254MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 23:49:08 compute-0 nova_compute[247516]: 2026-01-21 23:49:08.703 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:49:08 compute-0 nova_compute[247516]: 2026-01-21 23:49:08.704 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:49:08 compute-0 nova_compute[247516]: 2026-01-21 23:49:08.810 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 23:49:08 compute-0 nova_compute[247516]: 2026-01-21 23:49:08.811 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 23:49:08 compute-0 nova_compute[247516]: 2026-01-21 23:49:08.836 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:49:09 compute-0 ceph-mon[74318]: pgmap v902: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:09 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3224925274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:49:09 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/469977020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:49:09 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/820746035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:49:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:49:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:49:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:49:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:49:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:49:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:49:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:49:09 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1526132857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:49:09 compute-0 nova_compute[247516]: 2026-01-21 23:49:09.316 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:49:09 compute-0 nova_compute[247516]: 2026-01-21 23:49:09.321 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 23:49:09 compute-0 nova_compute[247516]: 2026-01-21 23:49:09.342 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 23:49:09 compute-0 nova_compute[247516]: 2026-01-21 23:49:09.345 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 23:49:09 compute-0 nova_compute[247516]: 2026-01-21 23:49:09.347 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:49:10 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1526132857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:49:10 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2415135064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:49:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:10.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:10 compute-0 nova_compute[247516]: 2026-01-21 23:49:10.342 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:49:10 compute-0 nova_compute[247516]: 2026-01-21 23:49:10.343 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:49:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:10.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:11 compute-0 ceph-mon[74318]: pgmap v903: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:12.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:49:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:12.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:13 compute-0 ceph-mon[74318]: pgmap v904: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:14.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:14.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:15 compute-0 ceph-mon[74318]: pgmap v905: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:16.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:49:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:16.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:49:17 compute-0 ceph-mon[74318]: pgmap v906: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:49:17.651597) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039357651853, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1496, "num_deletes": 250, "total_data_size": 2544998, "memory_usage": 2594344, "flush_reason": "Manual Compaction"}
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039357669205, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1496503, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19369, "largest_seqno": 20864, "table_properties": {"data_size": 1491255, "index_size": 2516, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13426, "raw_average_key_size": 20, "raw_value_size": 1479650, "raw_average_value_size": 2248, "num_data_blocks": 114, "num_entries": 658, "num_filter_entries": 658, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769039213, "oldest_key_time": 1769039213, "file_creation_time": 1769039357, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 18029 microseconds, and 10326 cpu microseconds.
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:49:17.669649) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1496503 bytes OK
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:49:17.669761) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:49:17.671957) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:49:17.671992) EVENT_LOG_v1 {"time_micros": 1769039357671981, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:49:17.672011) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 2538668, prev total WAL file size 2538668, number of live WAL files 2.
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:49:17.673196) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1461KB)], [44(9469KB)]
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039357673295, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 11193036, "oldest_snapshot_seqno": -1}
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4687 keys, 8351733 bytes, temperature: kUnknown
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039357722419, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 8351733, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8320206, "index_size": 18681, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 116567, "raw_average_key_size": 24, "raw_value_size": 8235133, "raw_average_value_size": 1757, "num_data_blocks": 773, "num_entries": 4687, "num_filter_entries": 4687, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769039357, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:49:17.722823) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8351733 bytes
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:49:17.740616) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 227.3 rd, 169.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.2 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(13.1) write-amplify(5.6) OK, records in: 5138, records dropped: 451 output_compression: NoCompression
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:49:17.740673) EVENT_LOG_v1 {"time_micros": 1769039357740649, "job": 22, "event": "compaction_finished", "compaction_time_micros": 49244, "compaction_time_cpu_micros": 23451, "output_level": 6, "num_output_files": 1, "total_output_size": 8351733, "num_input_records": 5138, "num_output_records": 4687, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039357741542, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039357746128, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:49:17.673138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:49:17.746243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:49:17.746251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:49:17.746255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:49:17.746260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:49:17 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:49:17.746269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:49:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:18.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:18 compute-0 ceph-mon[74318]: pgmap v907: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:49:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:18.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:49:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 21 23:49:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 21 23:49:19 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 21 23:49:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:20.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:49:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:20.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:49:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 21 23:49:20 compute-0 ceph-mon[74318]: osdmap e146: 3 total, 3 up, 3 in
Jan 21 23:49:20 compute-0 ceph-mon[74318]: pgmap v909: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 21 23:49:20 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 21 23:49:21 compute-0 ceph-mon[74318]: osdmap e147: 3 total, 3 up, 3 in
Jan 21 23:49:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:22.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 127 B/s rd, 511 B/s wr, 0 op/s
Jan 21 23:49:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:49:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:49:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:22.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:49:22 compute-0 ceph-mon[74318]: pgmap v911: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 127 B/s rd, 511 B/s wr, 0 op/s
Jan 21 23:49:23 compute-0 podman[252127]: 2026-01-21 23:49:23.029422241 +0000 UTC m=+0.123970321 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 21 23:49:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:49:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:24.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:49:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 127 B/s rd, 511 B/s wr, 0 op/s
Jan 21 23:49:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:49:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:24.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:49:25 compute-0 sudo[252154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:49:25 compute-0 sudo[252154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:25 compute-0 sudo[252154]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:25 compute-0 sudo[252179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:49:25 compute-0 sudo[252179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:25 compute-0 sudo[252179]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:25 compute-0 ceph-mon[74318]: pgmap v912: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 127 B/s rd, 511 B/s wr, 0 op/s
Jan 21 23:49:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 21 23:49:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2295835559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:49:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 21 23:49:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2295835559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:49:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:49:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:26.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:49:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s
Jan 21 23:49:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2295835559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:49:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2295835559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:49:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:26.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:27 compute-0 ceph-mon[74318]: pgmap v913: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s
Jan 21 23:49:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:49:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Jan 21 23:49:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Jan 21 23:49:27 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Jan 21 23:49:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:28.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s
Jan 21 23:49:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:28.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:28 compute-0 ceph-mon[74318]: osdmap e148: 3 total, 3 up, 3 in
Jan 21 23:49:28 compute-0 ceph-mon[74318]: pgmap v915: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s
Jan 21 23:49:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:30.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 319 B/s rd, 638 B/s wr, 1 op/s
Jan 21 23:49:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:30.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:30 compute-0 podman[252207]: 2026-01-21 23:49:30.986132528 +0000 UTC m=+0.088823821 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 21 23:49:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 21 23:49:31 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2480623400' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:49:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 21 23:49:31 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2480623400' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:49:31 compute-0 ceph-mon[74318]: pgmap v916: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 319 B/s rd, 638 B/s wr, 1 op/s
Jan 21 23:49:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2480623400' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:49:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2480623400' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:49:31 compute-0 sudo[252227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:49:31 compute-0 sudo[252227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:32 compute-0 sudo[252227]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:32.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:32 compute-0 sudo[252252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:49:32 compute-0 sudo[252252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:32 compute-0 sudo[252252]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:32 compute-0 sudo[252277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:49:32 compute-0 sudo[252277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:32 compute-0 sudo[252277]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:32 compute-0 sudo[252302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:49:32 compute-0 sudo[252302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 204 B/s wr, 14 op/s
Jan 21 23:49:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:49:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:32.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:32 compute-0 sudo[252302]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:49:32 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:49:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:49:32 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:49:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:49:32 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:49:32 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 94d41905-7344-4058-836d-403d73709f2b does not exist
Jan 21 23:49:32 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 6544eec1-881d-4ae6-96de-c0443e06570b does not exist
Jan 21 23:49:32 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev fbfa1169-afb3-4156-9e54-5ec77bf05754 does not exist
Jan 21 23:49:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:49:32 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:49:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:49:32 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:49:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:49:32 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:49:32 compute-0 sudo[252359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:49:32 compute-0 sudo[252359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:32 compute-0 sudo[252359]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:33 compute-0 sudo[252384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:49:33 compute-0 sudo[252384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:33 compute-0 sudo[252384]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:33 compute-0 sudo[252409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:49:33 compute-0 sudo[252409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:33 compute-0 sudo[252409]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:33 compute-0 sudo[252434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:49:33 compute-0 sudo[252434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:33 compute-0 ceph-mon[74318]: pgmap v917: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 204 B/s wr, 14 op/s
Jan 21 23:49:33 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:49:33 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:49:33 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:49:33 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:49:33 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:49:33 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:49:33 compute-0 podman[252502]: 2026-01-21 23:49:33.618739256 +0000 UTC m=+0.057858323 container create 413bef01a46b983e7998aee994c860f1a426a610f949f55d26f1cc7202512a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shtern, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 21 23:49:33 compute-0 systemd[1]: Started libpod-conmon-413bef01a46b983e7998aee994c860f1a426a610f949f55d26f1cc7202512a5e.scope.
Jan 21 23:49:33 compute-0 podman[252502]: 2026-01-21 23:49:33.590551623 +0000 UTC m=+0.029670740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:49:33 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:49:33 compute-0 podman[252502]: 2026-01-21 23:49:33.715194784 +0000 UTC m=+0.154313891 container init 413bef01a46b983e7998aee994c860f1a426a610f949f55d26f1cc7202512a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:49:33 compute-0 podman[252502]: 2026-01-21 23:49:33.726703764 +0000 UTC m=+0.165822831 container start 413bef01a46b983e7998aee994c860f1a426a610f949f55d26f1cc7202512a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shtern, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 21 23:49:33 compute-0 podman[252502]: 2026-01-21 23:49:33.730375939 +0000 UTC m=+0.169494996 container attach 413bef01a46b983e7998aee994c860f1a426a610f949f55d26f1cc7202512a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:49:33 compute-0 lucid_shtern[252518]: 167 167
Jan 21 23:49:33 compute-0 systemd[1]: libpod-413bef01a46b983e7998aee994c860f1a426a610f949f55d26f1cc7202512a5e.scope: Deactivated successfully.
Jan 21 23:49:33 compute-0 podman[252502]: 2026-01-21 23:49:33.734992083 +0000 UTC m=+0.174111140 container died 413bef01a46b983e7998aee994c860f1a426a610f949f55d26f1cc7202512a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 23:49:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3c631461901a4ca305b95a0d6848490640482bf1d88f3b5c4fb807e73afffe4-merged.mount: Deactivated successfully.
Jan 21 23:49:33 compute-0 podman[252502]: 2026-01-21 23:49:33.781311243 +0000 UTC m=+0.220430310 container remove 413bef01a46b983e7998aee994c860f1a426a610f949f55d26f1cc7202512a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:49:33 compute-0 systemd[1]: libpod-conmon-413bef01a46b983e7998aee994c860f1a426a610f949f55d26f1cc7202512a5e.scope: Deactivated successfully.
Jan 21 23:49:34 compute-0 podman[252541]: 2026-01-21 23:49:34.020535769 +0000 UTC m=+0.074313406 container create 630670b21cf2f0e45ddeca555e4bce86104c6e6838cf865d0b83e02f8897b954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_aryabhata, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 23:49:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:34.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:34 compute-0 systemd[1]: Started libpod-conmon-630670b21cf2f0e45ddeca555e4bce86104c6e6838cf865d0b83e02f8897b954.scope.
Jan 21 23:49:34 compute-0 podman[252541]: 2026-01-21 23:49:33.994304119 +0000 UTC m=+0.048081806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:49:34 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a305cf599bdbbee83bed414f6f2984efea61a04ad01cb4fc231a3e88cb89dc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a305cf599bdbbee83bed414f6f2984efea61a04ad01cb4fc231a3e88cb89dc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a305cf599bdbbee83bed414f6f2984efea61a04ad01cb4fc231a3e88cb89dc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a305cf599bdbbee83bed414f6f2984efea61a04ad01cb4fc231a3e88cb89dc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a305cf599bdbbee83bed414f6f2984efea61a04ad01cb4fc231a3e88cb89dc6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:49:34 compute-0 podman[252541]: 2026-01-21 23:49:34.118998481 +0000 UTC m=+0.172776088 container init 630670b21cf2f0e45ddeca555e4bce86104c6e6838cf865d0b83e02f8897b954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_aryabhata, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 21 23:49:34 compute-0 podman[252541]: 2026-01-21 23:49:34.133518206 +0000 UTC m=+0.187295813 container start 630670b21cf2f0e45ddeca555e4bce86104c6e6838cf865d0b83e02f8897b954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 23:49:34 compute-0 podman[252541]: 2026-01-21 23:49:34.137072957 +0000 UTC m=+0.190850714 container attach 630670b21cf2f0e45ddeca555e4bce86104c6e6838cf865d0b83e02f8897b954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 21 23:49:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 204 B/s wr, 14 op/s
Jan 21 23:49:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:34.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:34 compute-0 adoring_aryabhata[252558]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:49:34 compute-0 adoring_aryabhata[252558]: --> relative data size: 1.0
Jan 21 23:49:34 compute-0 adoring_aryabhata[252558]: --> All data devices are unavailable
Jan 21 23:49:35 compute-0 systemd[1]: libpod-630670b21cf2f0e45ddeca555e4bce86104c6e6838cf865d0b83e02f8897b954.scope: Deactivated successfully.
Jan 21 23:49:35 compute-0 podman[252541]: 2026-01-21 23:49:35.008296661 +0000 UTC m=+1.062074298 container died 630670b21cf2f0e45ddeca555e4bce86104c6e6838cf865d0b83e02f8897b954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_aryabhata, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:49:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a305cf599bdbbee83bed414f6f2984efea61a04ad01cb4fc231a3e88cb89dc6-merged.mount: Deactivated successfully.
Jan 21 23:49:35 compute-0 podman[252541]: 2026-01-21 23:49:35.084147215 +0000 UTC m=+1.137924842 container remove 630670b21cf2f0e45ddeca555e4bce86104c6e6838cf865d0b83e02f8897b954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_aryabhata, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 21 23:49:35 compute-0 systemd[1]: libpod-conmon-630670b21cf2f0e45ddeca555e4bce86104c6e6838cf865d0b83e02f8897b954.scope: Deactivated successfully.
Jan 21 23:49:35 compute-0 sudo[252434]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:35 compute-0 sudo[252585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:49:35 compute-0 sudo[252585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:35 compute-0 sudo[252585]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:35 compute-0 sudo[252610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:49:35 compute-0 sudo[252610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:35 compute-0 sudo[252610]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:35 compute-0 sudo[252635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:49:35 compute-0 sudo[252635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:35 compute-0 sudo[252635]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:35 compute-0 sudo[252661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:49:35 compute-0 ceph-mon[74318]: pgmap v918: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 204 B/s wr, 14 op/s
Jan 21 23:49:35 compute-0 sudo[252661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:35 compute-0 podman[252725]: 2026-01-21 23:49:35.838934267 +0000 UTC m=+0.060094882 container create 807b9a3895d5ae47bbe158fa6f8b26eddb9f7afe822b6ce473d69037bf52a056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:49:35 compute-0 systemd[1]: Started libpod-conmon-807b9a3895d5ae47bbe158fa6f8b26eddb9f7afe822b6ce473d69037bf52a056.scope.
Jan 21 23:49:35 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:49:35 compute-0 podman[252725]: 2026-01-21 23:49:35.819863869 +0000 UTC m=+0.041024464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:49:35 compute-0 podman[252725]: 2026-01-21 23:49:35.927045114 +0000 UTC m=+0.148205719 container init 807b9a3895d5ae47bbe158fa6f8b26eddb9f7afe822b6ce473d69037bf52a056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Jan 21 23:49:35 compute-0 podman[252725]: 2026-01-21 23:49:35.941429374 +0000 UTC m=+0.162589999 container start 807b9a3895d5ae47bbe158fa6f8b26eddb9f7afe822b6ce473d69037bf52a056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chaplygin, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:49:35 compute-0 podman[252725]: 2026-01-21 23:49:35.945548734 +0000 UTC m=+0.166709319 container attach 807b9a3895d5ae47bbe158fa6f8b26eddb9f7afe822b6ce473d69037bf52a056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Jan 21 23:49:35 compute-0 determined_chaplygin[252741]: 167 167
Jan 21 23:49:35 compute-0 systemd[1]: libpod-807b9a3895d5ae47bbe158fa6f8b26eddb9f7afe822b6ce473d69037bf52a056.scope: Deactivated successfully.
Jan 21 23:49:35 compute-0 conmon[252741]: conmon 807b9a3895d5ae47bbe1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-807b9a3895d5ae47bbe158fa6f8b26eddb9f7afe822b6ce473d69037bf52a056.scope/container/memory.events
Jan 21 23:49:35 compute-0 podman[252725]: 2026-01-21 23:49:35.948412633 +0000 UTC m=+0.169573228 container died 807b9a3895d5ae47bbe158fa6f8b26eddb9f7afe822b6ce473d69037bf52a056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chaplygin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:49:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a19c498f511b52b2e0d6da3618dcc6185fd089e190d0a9756ecae2801cdbab40-merged.mount: Deactivated successfully.
Jan 21 23:49:35 compute-0 podman[252725]: 2026-01-21 23:49:35.996337022 +0000 UTC m=+0.217497637 container remove 807b9a3895d5ae47bbe158fa6f8b26eddb9f7afe822b6ce473d69037bf52a056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 21 23:49:36 compute-0 systemd[1]: libpod-conmon-807b9a3895d5ae47bbe158fa6f8b26eddb9f7afe822b6ce473d69037bf52a056.scope: Deactivated successfully.
Jan 21 23:49:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:36.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:36 compute-0 podman[252764]: 2026-01-21 23:49:36.217021349 +0000 UTC m=+0.078176158 container create 5463191ded872bf369b0e8464f5eee5c1d0ba1fcaf53882530c662d44b5156b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:49:36 compute-0 systemd[1]: Started libpod-conmon-5463191ded872bf369b0e8464f5eee5c1d0ba1fcaf53882530c662d44b5156b0.scope.
Jan 21 23:49:36 compute-0 podman[252764]: 2026-01-21 23:49:36.18574621 +0000 UTC m=+0.046901079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:49:36 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90fcd2653adad536764249a21a66009c11325a0dd04bf2aa87de8c0d2ae427d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90fcd2653adad536764249a21a66009c11325a0dd04bf2aa87de8c0d2ae427d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90fcd2653adad536764249a21a66009c11325a0dd04bf2aa87de8c0d2ae427d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90fcd2653adad536764249a21a66009c11325a0dd04bf2aa87de8c0d2ae427d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:49:36 compute-0 podman[252764]: 2026-01-21 23:49:36.328793987 +0000 UTC m=+0.189948836 container init 5463191ded872bf369b0e8464f5eee5c1d0ba1fcaf53882530c662d44b5156b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 21 23:49:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 307 B/s wr, 16 op/s
Jan 21 23:49:36 compute-0 podman[252764]: 2026-01-21 23:49:36.352705085 +0000 UTC m=+0.213859894 container start 5463191ded872bf369b0e8464f5eee5c1d0ba1fcaf53882530c662d44b5156b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sammet, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 21 23:49:36 compute-0 podman[252764]: 2026-01-21 23:49:36.357689421 +0000 UTC m=+0.218844240 container attach 5463191ded872bf369b0e8464f5eee5c1d0ba1fcaf53882530c662d44b5156b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 21 23:49:36 compute-0 ceph-mon[74318]: pgmap v919: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 307 B/s wr, 16 op/s
Jan 21 23:49:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:49:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:36.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]: {
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:     "1": [
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:         {
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:             "devices": [
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:                 "/dev/loop3"
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:             ],
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:             "lv_name": "ceph_lv0",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:             "lv_size": "7511998464",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:             "name": "ceph_lv0",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:             "tags": {
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:                 "ceph.cluster_name": "ceph",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:                 "ceph.crush_device_class": "",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:                 "ceph.encrypted": "0",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:                 "ceph.osd_id": "1",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:                 "ceph.type": "block",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:                 "ceph.vdo": "0"
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:             },
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:             "type": "block",
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:             "vg_name": "ceph_vg0"
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:         }
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]:     ]
Jan 21 23:49:37 compute-0 mystifying_sammet[252781]: }
Jan 21 23:49:37 compute-0 systemd[1]: libpod-5463191ded872bf369b0e8464f5eee5c1d0ba1fcaf53882530c662d44b5156b0.scope: Deactivated successfully.
Jan 21 23:49:37 compute-0 podman[252764]: 2026-01-21 23:49:37.276873737 +0000 UTC m=+1.138028506 container died 5463191ded872bf369b0e8464f5eee5c1d0ba1fcaf53882530c662d44b5156b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sammet, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 21 23:49:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-90fcd2653adad536764249a21a66009c11325a0dd04bf2aa87de8c0d2ae427d1-merged.mount: Deactivated successfully.
Jan 21 23:49:37 compute-0 podman[252764]: 2026-01-21 23:49:37.356690015 +0000 UTC m=+1.217844824 container remove 5463191ded872bf369b0e8464f5eee5c1d0ba1fcaf53882530c662d44b5156b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:49:37 compute-0 systemd[1]: libpod-conmon-5463191ded872bf369b0e8464f5eee5c1d0ba1fcaf53882530c662d44b5156b0.scope: Deactivated successfully.
Jan 21 23:49:37 compute-0 sudo[252661]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:37 compute-0 sudo[252806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:49:37 compute-0 sudo[252806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:37 compute-0 sudo[252806]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:37 compute-0 sudo[252831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:49:37 compute-0 sudo[252831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:37 compute-0 sudo[252831]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:37 compute-0 sudo[252856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:49:37 compute-0 sudo[252856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:37 compute-0 sudo[252856]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:49:37 compute-0 sudo[252881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:49:37 compute-0 sudo[252881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:38.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:38 compute-0 podman[252947]: 2026-01-21 23:49:38.157895879 +0000 UTC m=+0.050513182 container create 51770768e619dc2f5b8b661de65ff744bd5da14c0502ab80a23a309861ffbada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nash, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 21 23:49:38 compute-0 systemd[1]: Started libpod-conmon-51770768e619dc2f5b8b661de65ff744bd5da14c0502ab80a23a309861ffbada.scope.
Jan 21 23:49:38 compute-0 podman[252947]: 2026-01-21 23:49:38.137585954 +0000 UTC m=+0.030203177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:49:38 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:49:38 compute-0 podman[252947]: 2026-01-21 23:49:38.2528374 +0000 UTC m=+0.145454663 container init 51770768e619dc2f5b8b661de65ff744bd5da14c0502ab80a23a309861ffbada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nash, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 23:49:38 compute-0 podman[252947]: 2026-01-21 23:49:38.263890266 +0000 UTC m=+0.156507479 container start 51770768e619dc2f5b8b661de65ff744bd5da14c0502ab80a23a309861ffbada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nash, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:49:38 compute-0 podman[252947]: 2026-01-21 23:49:38.268093708 +0000 UTC m=+0.160710961 container attach 51770768e619dc2f5b8b661de65ff744bd5da14c0502ab80a23a309861ffbada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nash, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:49:38 compute-0 admiring_nash[252963]: 167 167
Jan 21 23:49:38 compute-0 systemd[1]: libpod-51770768e619dc2f5b8b661de65ff744bd5da14c0502ab80a23a309861ffbada.scope: Deactivated successfully.
Jan 21 23:49:38 compute-0 conmon[252963]: conmon 51770768e619dc2f5b8b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-51770768e619dc2f5b8b661de65ff744bd5da14c0502ab80a23a309861ffbada.scope/container/memory.events
Jan 21 23:49:38 compute-0 podman[252947]: 2026-01-21 23:49:38.271389501 +0000 UTC m=+0.164006714 container died 51770768e619dc2f5b8b661de65ff744bd5da14c0502ab80a23a309861ffbada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nash, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 21 23:49:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-9561d018451ee485017b0432f2ca618e2ba12c250bb1e10cf57a99a1523595e3-merged.mount: Deactivated successfully.
Jan 21 23:49:38 compute-0 podman[252947]: 2026-01-21 23:49:38.318122713 +0000 UTC m=+0.210739926 container remove 51770768e619dc2f5b8b661de65ff744bd5da14c0502ab80a23a309861ffbada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:49:38 compute-0 systemd[1]: libpod-conmon-51770768e619dc2f5b8b661de65ff744bd5da14c0502ab80a23a309861ffbada.scope: Deactivated successfully.
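The create/init/start/attach/died/remove sequence above is cephadm running a short-lived helper container from the ceph image and capturing its one line of output ("167 167", which looks like a uid/gid probe; 167 is the ceph user and group on RHEL-family images, though the exact command is not logged). A sketch of that pattern, under the assumption that the probe is a `stat` of /var/lib/ceph; the image digest is copied from the log:

```python
# Short-lived "run a command in the ceph image, capture stdout" pattern,
# as podman logs above. The stat command is an assumption about what
# printed "167 167".
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat",
     IMAGE, "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())   # e.g. "167 167"
```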
Jan 21 23:49:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 287 B/s wr, 15 op/s
Jan 21 23:49:38 compute-0 podman[252986]: 2026-01-21 23:49:38.469894663 +0000 UTC m=+0.041092696 container create f2fc92f1353b17e38834c9a518a430be1af3bede0f94674c20c17e0d98e130f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 21 23:49:38 compute-0 systemd[1]: Started libpod-conmon-f2fc92f1353b17e38834c9a518a430be1af3bede0f94674c20c17e0d98e130f3.scope.
Jan 21 23:49:38 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:49:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7462c61df568b0f0ef4af6e10b68321e508e2fc06b1cae56013cfabe9716613e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:49:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7462c61df568b0f0ef4af6e10b68321e508e2fc06b1cae56013cfabe9716613e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:49:38 compute-0 podman[252986]: 2026-01-21 23:49:38.452402696 +0000 UTC m=+0.023600719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:49:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7462c61df568b0f0ef4af6e10b68321e508e2fc06b1cae56013cfabe9716613e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:49:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7462c61df568b0f0ef4af6e10b68321e508e2fc06b1cae56013cfabe9716613e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:49:38 compute-0 podman[252986]: 2026-01-21 23:49:38.558609019 +0000 UTC m=+0.129807072 container init f2fc92f1353b17e38834c9a518a430be1af3bede0f94674c20c17e0d98e130f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 21 23:49:38 compute-0 podman[252986]: 2026-01-21 23:49:38.57138394 +0000 UTC m=+0.142581983 container start f2fc92f1353b17e38834c9a518a430be1af3bede0f94674c20c17e0d98e130f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 21 23:49:38 compute-0 podman[252986]: 2026-01-21 23:49:38.57586411 +0000 UTC m=+0.147062183 container attach f2fc92f1353b17e38834c9a518a430be1af3bede0f94674c20c17e0d98e130f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 23:49:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:38.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:49:39
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['volumes', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'images', 'backups']
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
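Here the balancer module, in upmap mode with max misplaced 0.05, walked the listed pools and prepared 0 of at most 10 changes, i.e. the cluster is already balanced. A small sketch of inspecting the same state from the CLI, assuming an admin keyring is available on the host:

```python
# Query the mgr balancer module for the state reflected in the lines above.
import json
import subprocess

status = json.loads(
    subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                   capture_output=True, text=True, check=True).stdout
)
print(status.get("mode"))    # expected: "upmap", as logged above
print(status.get("active"))
```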
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:49:39 compute-0 mystifying_fermat[253003]: {
Jan 21 23:49:39 compute-0 mystifying_fermat[253003]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:49:39 compute-0 mystifying_fermat[253003]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:49:39 compute-0 mystifying_fermat[253003]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:49:39 compute-0 mystifying_fermat[253003]:         "osd_id": 1,
Jan 21 23:49:39 compute-0 mystifying_fermat[253003]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:49:39 compute-0 mystifying_fermat[253003]:         "type": "bluestore"
Jan 21 23:49:39 compute-0 mystifying_fermat[253003]:     }
Jan 21 23:49:39 compute-0 mystifying_fermat[253003]: }
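The JSON printed by mystifying_fermat is the output of the `ceph-volume ... raw list --format json` invocation from the sudo line at 23:49:37: a map of osd_uuid to device metadata. A minimal parse of that exact structure, values copied verbatim from the log:

```python
# Parse the `ceph-volume raw list --format json` output shown above.
import json

raw = """
{
    "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
        "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 1,
        "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
        "type": "bluestore"
    }
}
"""

for osd_uuid, meta in json.loads(raw).items():
    print(f"osd.{meta['osd_id']} ({meta['type']}) on {meta['device']}, "
          f"fsid {meta['ceph_fsid']}")
```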
Jan 21 23:49:39 compute-0 ceph-mon[74318]: pgmap v920: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 287 B/s wr, 15 op/s
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:49:39 compute-0 systemd[1]: libpod-f2fc92f1353b17e38834c9a518a430be1af3bede0f94674c20c17e0d98e130f3.scope: Deactivated successfully.
Jan 21 23:49:39 compute-0 podman[252986]: 2026-01-21 23:49:39.453859866 +0000 UTC m=+1.025057909 container died f2fc92f1353b17e38834c9a518a430be1af3bede0f94674c20c17e0d98e130f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:49:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7462c61df568b0f0ef4af6e10b68321e508e2fc06b1cae56013cfabe9716613e-merged.mount: Deactivated successfully.
Jan 21 23:49:39 compute-0 podman[252986]: 2026-01-21 23:49:39.520362978 +0000 UTC m=+1.091560991 container remove f2fc92f1353b17e38834c9a518a430be1af3bede0f94674c20c17e0d98e130f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:49:39 compute-0 systemd[1]: libpod-conmon-f2fc92f1353b17e38834c9a518a430be1af3bede0f94674c20c17e0d98e130f3.scope: Deactivated successfully.
Jan 21 23:49:39 compute-0 sudo[252881]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:49:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:49:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:49:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 7c81c8df-5c45-4560-8fc1-ceee30bdb6dc does not exist
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 13da3d2c-97d6-4051-96f3-e6ca19aa22a3 does not exist
Jan 21 23:49:39 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 3d76de71-e345-4bbf-a7eb-4eb31c7ba771 does not exist
Jan 21 23:49:39 compute-0 sudo[253039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:49:39 compute-0 sudo[253039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:39 compute-0 sudo[253039]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:39 compute-0 sudo[253064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:49:39 compute-0 sudo[253064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:39 compute-0 sudo[253064]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:40.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 21 23:49:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:49:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:49:40 compute-0 ceph-mon[74318]: pgmap v921: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 21 23:49:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:40.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:42.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 21 23:49:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:49:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:42.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:43 compute-0 ceph-mon[74318]: pgmap v922: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 21 23:49:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:44.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Jan 21 23:49:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:49:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:44.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:49:45 compute-0 sudo[253092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:49:45 compute-0 sudo[253092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:45 compute-0 sudo[253092]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:45 compute-0 ceph-mon[74318]: pgmap v923: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Jan 21 23:49:45 compute-0 sudo[253118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:49:45 compute-0 sudo[253118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:49:45 compute-0 sudo[253118]: pam_unix(sudo:session): session closed for user root
Jan 21 23:49:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:49:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:46.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:49:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Jan 21 23:49:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:46.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:47 compute-0 ceph-mon[74318]: pgmap v924: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Jan 21 23:49:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:49:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:48.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:48.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:49:48.744 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:49:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:49:48.746 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:49:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:49:48.747 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
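The acquire / "waited" / "held" trio above is oslo_concurrency.lockutils instrumentation around a named critical section in the metadata agent. A stdlib-only sketch of the same decorator pattern (not the oslo implementation itself; the task name is taken from the log):

```python
# Named-lock decorator reproducing the acquire/waited/held logging pattern.
import threading
import time

_locks: dict[str, threading.Lock] = {}

def synchronized(name: str):
    lock = _locks.setdefault(name, threading.Lock())
    def deco(fn):
        def inner(*args, **kwargs):
            t0 = time.monotonic()
            with lock:
                waited = time.monotonic() - t0
                t1 = time.monotonic()
                try:
                    return fn(*args, **kwargs)
                finally:
                    print(f'Lock "{name}" held by "{fn.__name__}" '
                          f':: waited {waited:.3f}s, held '
                          f'{time.monotonic() - t1:.3f}s')
        return inner
    return deco

@synchronized("_check_child_processes")
def check_child_processes():
    pass  # stand-in for ProcessMonitor._check_child_processes

check_child_processes()
```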
Jan 21 23:49:49 compute-0 ceph-mon[74318]: pgmap v925: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:50.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:50.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:51 compute-0 ceph-mon[74318]: pgmap v926: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:52.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:49:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:52.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:53 compute-0 ceph-mon[74318]: pgmap v927: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:54 compute-0 podman[253147]: 2026-01-21 23:49:54.044625248 +0000 UTC m=+0.145324790 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 23:49:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:54.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
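The pg_autoscaler figures above are internally consistent: each "pg target" equals capacity_ratio × bias × an overall PG budget, and from these lines that budget works out to 300 (plausibly 3 OSDs × mon_target_pg_per_osd=100, an assumption, since the budget itself is not printed). "Quantized to" then snaps to a power of two and leaves pg_num unchanged unless the raw target is far from the current value, which is why every pool stays at its current count. A worked cross-check using three pools' numbers verbatim:

```python
# Cross-check the pg_autoscaler arithmetic logged above.
import math

PG_BUDGET = 300  # inferred from the logged ratios; see note above
pools = [
    # (name, capacity_ratio, bias, logged pg target)
    (".mgr",               2.0538165363856318e-05, 1.0, 0.006161449609156895),
    ("images",             0.0019031427391587568,  1.0, 0.570942821747627),
    ("cephfs.cephfs.meta", 1.4540294062907128e-06, 4.0, 0.0017448352875488555),
]
for name, ratio, bias, logged in pools:
    target = ratio * bias * PG_BUDGET
    assert math.isclose(target, logged, rel_tol=1e-9), name
    print(f"{name}: {target:.6g} pgs before quantization")
```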
Jan 21 23:49:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:54.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:55 compute-0 ceph-mon[74318]: pgmap v928: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:55 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:49:55.912 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 23:49:55 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:49:55.913 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
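On each SB_Global nb_cfg bump the metadata agent waits a short delay before acknowledging it in its Chassis_Private row (the DbSetCommand writing 'neutron:ovn-metadata-sb-cfg': '4' appears at 23:50:01 below). Assuming the delay is randomized to spread writes from many agents, a stdlib sketch of that debounce pattern; the actual write goes through ovsdbapp and is stubbed out here:

```python
# "Delay, then ack nb_cfg" pattern visible above and at 23:50:01 below.
import random
import threading

def ack_nb_cfg(nb_cfg: int) -> None:
    # Stub for the Chassis_Private external_ids update done via ovsdbapp.
    print(f"external_ids['neutron:ovn-metadata-sb-cfg'] = '{nb_cfg}'")

def on_sb_global_update(nb_cfg: int, max_delay: float = 10.0) -> None:
    delay = random.uniform(0, max_delay)   # the run above drew ~6 seconds
    print(f"Delaying updating chassis table for {delay:.0f} seconds")
    threading.Timer(delay, ack_nb_cfg, args=(nb_cfg,)).start()

on_sb_global_update(4)
```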
Jan 21 23:49:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:56.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:56 compute-0 ceph-mon[74318]: pgmap v929: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:56.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:49:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:49:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:49:58.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:49:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:49:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:49:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:49:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:49:58.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:49:59 compute-0 ceph-mon[74318]: pgmap v930: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:00 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 21 23:50:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:50:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:00.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:50:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:00 compute-0 ceph-mon[74318]: overall HEALTH_OK
Jan 21 23:50:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:50:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:00.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:50:01 compute-0 ceph-mon[74318]: pgmap v931: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:01 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:50:01.916 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 23:50:01 compute-0 podman[253179]: 2026-01-21 23:50:01.94842592 +0000 UTC m=+0.058319766 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 23:50:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:02.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 21 23:50:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:50:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:02.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:03 compute-0 ceph-mon[74318]: pgmap v932: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 21 23:50:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:04.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 21 23:50:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:04.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:04 compute-0 nova_compute[247516]: 2026-01-21 23:50:04.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:50:04 compute-0 nova_compute[247516]: 2026-01-21 23:50:04.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 21 23:50:05 compute-0 nova_compute[247516]: 2026-01-21 23:50:05.018 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 21 23:50:05 compute-0 nova_compute[247516]: 2026-01-21 23:50:05.020 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:50:05 compute-0 nova_compute[247516]: 2026-01-21 23:50:05.021 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 21 23:50:05 compute-0 nova_compute[247516]: 2026-01-21 23:50:05.031 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:50:05 compute-0 ceph-mon[74318]: pgmap v933: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 21 23:50:05 compute-0 sudo[253201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:50:05 compute-0 sudo[253201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:05 compute-0 sudo[253201]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:05 compute-0 sudo[253226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:50:05 compute-0 sudo[253226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:05 compute-0 sudo[253226]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:06 compute-0 nova_compute[247516]: 2026-01-21 23:50:06.038 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:50:06 compute-0 nova_compute[247516]: 2026-01-21 23:50:06.038 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 23:50:06 compute-0 nova_compute[247516]: 2026-01-21 23:50:06.039 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 23:50:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:06.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:06 compute-0 nova_compute[247516]: 2026-01-21 23:50:06.118 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 23:50:06 compute-0 nova_compute[247516]: 2026-01-21 23:50:06.118 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:50:06 compute-0 nova_compute[247516]: 2026-01-21 23:50:06.119 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
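The "Running periodic task ComputeManager._*" lines come from oslo.service's periodic-task runner iterating a registry of decorated methods, some of which short-circuit on configuration, as _reclaim_queued_deletes just did with reclaim_instance_interval <= 0. A minimal stand-in for that pattern (not nova's implementation; names taken from the log):

```python
# Registry-of-periodic-tasks pattern with a config-driven skip.
CONF = {"reclaim_instance_interval": 0}
TASKS = []

def periodic_task(fn):
    TASKS.append(fn)
    return fn

@periodic_task
def _reclaim_queued_deletes():
    if CONF["reclaim_instance_interval"] <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")

@periodic_task
def _poll_rescued_instances():
    pass  # no-op stand-in

def run_periodic_tasks():
    for task in TASKS:
        print(f"Running periodic task ComputeManager.{task.__name__}")
        task()

run_periodic_tasks()
```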
Jan 21 23:50:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 21 23:50:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:50:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:06.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:50:06 compute-0 nova_compute[247516]: 2026-01-21 23:50:06.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:50:07 compute-0 ceph-mon[74318]: pgmap v934: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 21 23:50:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:50:07 compute-0 nova_compute[247516]: 2026-01-21 23:50:07.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:50:07 compute-0 nova_compute[247516]: 2026-01-21 23:50:07.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:50:07 compute-0 nova_compute[247516]: 2026-01-21 23:50:07.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:50:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:50:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:08.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:50:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 7 op/s
Jan 21 23:50:08 compute-0 ceph-mon[74318]: pgmap v935: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 7 op/s
Jan 21 23:50:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:50:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:08.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:50:08 compute-0 nova_compute[247516]: 2026-01-21 23:50:08.987 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:50:08 compute-0 nova_compute[247516]: 2026-01-21 23:50:08.990 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:50:09 compute-0 nova_compute[247516]: 2026-01-21 23:50:09.024 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:50:09 compute-0 nova_compute[247516]: 2026-01-21 23:50:09.025 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:50:09 compute-0 nova_compute[247516]: 2026-01-21 23:50:09.025 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:50:09 compute-0 nova_compute[247516]: 2026-01-21 23:50:09.026 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 23:50:09 compute-0 nova_compute[247516]: 2026-01-21 23:50:09.026 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:50:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:50:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:50:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:50:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:50:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:50:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:50:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:50:09 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/749897222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:50:09 compute-0 nova_compute[247516]: 2026-01-21 23:50:09.528 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:50:09 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/749897222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
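The resource tracker shells out to `ceph df --format=json` (as just logged) to compute the RBD-backed disk figures that appear in the "Hypervisor/Node resource view" below; note that free_disk=20.98828125GB is exactly 22535995392 bytes, the same root capacity the autoscaler logged above. A sketch of turning that JSON into GiB, assuming the standard ceph df field names; the avail/used sample values are chosen to match the 194 MiB used / 21 GiB avail pgmap lines in this log:

```python
# Convert `ceph df --format=json` stats into the GiB figures logged below.
import json

ceph_df = json.loads("""
{"stats": {"total_bytes": 22535995392,
           "total_avail_bytes": 22332375040,
           "total_used_raw_bytes": 203620352}}
""")

GIB = 1024 ** 3
stats = ceph_df["stats"]
print(f"total={stats['total_bytes'] / GIB:.5f} GiB")        # 20.98828 GiB
print(f"avail={stats['total_avail_bytes'] / GIB:.2f} GiB")  # ~20.8 GiB
print(f"used ={stats['total_used_raw_bytes'] / GIB * 1024:.0f} MiB")  # ~194
```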
Jan 21 23:50:09 compute-0 nova_compute[247516]: 2026-01-21 23:50:09.777 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 23:50:09 compute-0 nova_compute[247516]: 2026-01-21 23:50:09.779 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5233MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 23:50:09 compute-0 nova_compute[247516]: 2026-01-21 23:50:09.780 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:50:09 compute-0 nova_compute[247516]: 2026-01-21 23:50:09.780 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:50:09 compute-0 nova_compute[247516]: 2026-01-21 23:50:09.862 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 23:50:09 compute-0 nova_compute[247516]: 2026-01-21 23:50:09.863 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 23:50:09 compute-0 nova_compute[247516]: 2026-01-21 23:50:09.878 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:50:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:10.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
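The anonymous "HEAD / HTTP/1.0" requests above recur every ~2 s from 192.168.122.100 and .102 and look like load-balancer health probes against radosgw. A minimal reproduction (the endpoint port is an assumption; it is not shown in the log):

    import http.client

    conn = http.client.HTTPConnection("compute-0.ctlplane.example.com", 8080)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # radosgw answers 200 with an empty body
    conn.close()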
Jan 21 23:50:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:50:10 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1440435546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:50:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 78 MiB data, 204 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.6 MiB/s wr, 39 op/s
Jan 21 23:50:10 compute-0 nova_compute[247516]: 2026-01-21 23:50:10.381 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:50:10 compute-0 nova_compute[247516]: 2026-01-21 23:50:10.390 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 23:50:10 compute-0 nova_compute[247516]: 2026-01-21 23:50:10.423 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 23:50:10 compute-0 nova_compute[247516]: 2026-01-21 23:50:10.426 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 23:50:10 compute-0 nova_compute[247516]: 2026-01-21 23:50:10.427 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
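A worked example of what the inventory dict a few lines up means to the placement service: schedulable capacity per resource class is (total - reserved) * allocation_ratio, so these numbers yield 7167 MB of RAM, 32 VCPUs, and 18 GB of disk.

    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 20,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 18.0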
Jan 21 23:50:10 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2981860628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:50:10 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1440435546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:50:10 compute-0 ceph-mon[74318]: pgmap v936: 305 pgs: 305 active+clean; 78 MiB data, 204 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.6 MiB/s wr, 39 op/s
Jan 21 23:50:10 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/279718796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:50:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:10.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:11 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3085973034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:50:11 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1098636550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:50:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:12.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 88 MiB data, 206 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 21 23:50:12 compute-0 ceph-mon[74318]: pgmap v937: 305 pgs: 305 active+clean; 88 MiB data, 206 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 21 23:50:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:50:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:50:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:12.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:50:13 compute-0 nova_compute[247516]: 2026-01-21 23:50:13.429 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:50:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:14.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 88 MiB data, 206 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 21 23:50:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:50:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:14.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:50:15 compute-0 ceph-mon[74318]: pgmap v938: 305 pgs: 305 active+clean; 88 MiB data, 206 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 21 23:50:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:50:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:16.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:50:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 21 23:50:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:16.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:17 compute-0 ceph-mon[74318]: pgmap v939: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 21 23:50:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:50:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 21 23:50:18 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2206084345' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:50:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 21 23:50:18 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2206084345' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
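A hedged sketch of issuing the two audited mon commands above through the librados Python bindings (assumes python3-rados and a readable keyring for client.openstack); the df-plus-get-quota pairing from 192.168.122.10 is characteristic of a Cinder RBD capacity poll, though the log does not name the caller.

    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        for prefix, extra in (("df", {}),
                              ("osd pool get-quota", {"pool": "volumes"})):
            cmd = json.dumps({"prefix": prefix, "format": "json", **extra})
            ret, out, errs = cluster.mon_command(cmd, b'')
            print(prefix, "->", ret)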
Jan 21 23:50:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:18.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 21 23:50:18 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2206084345' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:50:18 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2206084345' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:50:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:18.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:19 compute-0 ceph-mon[74318]: pgmap v940: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 21 23:50:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:20.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 51 MiB data, 197 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 21 23:50:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:50:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:20.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:50:21 compute-0 ceph-mon[74318]: pgmap v941: 305 pgs: 305 active+clean; 51 MiB data, 197 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 21 23:50:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:22.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 218 KiB/s wr, 17 op/s
Jan 21 23:50:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:50:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:22.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:23 compute-0 ceph-mon[74318]: pgmap v942: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 218 KiB/s wr, 17 op/s
Jan 21 23:50:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:24.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 682 B/s wr, 16 op/s
Jan 21 23:50:24 compute-0 ceph-mon[74318]: pgmap v943: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 682 B/s wr, 16 op/s
Jan 21 23:50:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:24.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:25 compute-0 podman[253305]: 2026-01-21 23:50:25.049109763 +0000 UTC m=+0.146672032 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
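The podman line above is the transcript of a periodic container healthcheck for ovn_controller (test command /openstack/healthcheck, per its config_data). A sketch of triggering the same check by hand; exit status 0 means healthy:

    import subprocess

    res = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if res.returncode == 0 else "unhealthy")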
Jan 21 23:50:25 compute-0 sudo[253332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:50:25 compute-0 sudo[253332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:25 compute-0 sudo[253332]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:25 compute-0 sudo[253357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:50:25 compute-0 sudo[253357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:25 compute-0 sudo[253357]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:50:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:26.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:50:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/993256907' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:50:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/993256907' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:50:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 682 B/s wr, 16 op/s
Jan 21 23:50:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:50:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:26.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:50:27 compute-0 ceph-mon[74318]: pgmap v944: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 682 B/s wr, 16 op/s
Jan 21 23:50:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:27.679946) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039427680014, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 884, "num_deletes": 255, "total_data_size": 1261515, "memory_usage": 1285056, "flush_reason": "Manual Compaction"}
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039427694032, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1247482, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20865, "largest_seqno": 21748, "table_properties": {"data_size": 1243129, "index_size": 2005, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9468, "raw_average_key_size": 18, "raw_value_size": 1234149, "raw_average_value_size": 2443, "num_data_blocks": 90, "num_entries": 505, "num_filter_entries": 505, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769039358, "oldest_key_time": 1769039358, "file_creation_time": 1769039427, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 14184 microseconds, and 4676 cpu microseconds.
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:27.694128) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1247482 bytes OK
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:27.694156) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:27.695905) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:27.695930) EVENT_LOG_v1 {"time_micros": 1769039427695922, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:27.695952) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1257268, prev total WAL file size 1257268, number of live WAL files 2.
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:27.697008) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1218KB)], [47(8155KB)]
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039427697183, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9599215, "oldest_snapshot_seqno": -1}
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4665 keys, 9455515 bytes, temperature: kUnknown
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039427811746, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 9455515, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9422596, "index_size": 20137, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11717, "raw_key_size": 117292, "raw_average_key_size": 25, "raw_value_size": 9336339, "raw_average_value_size": 2001, "num_data_blocks": 832, "num_entries": 4665, "num_filter_entries": 4665, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769039427, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:27.812221) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 9455515 bytes
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:27.814550) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 83.9 rd, 82.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 8.0 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(15.3) write-amplify(7.6) OK, records in: 5192, records dropped: 527 output_compression: NoCompression
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:27.814603) EVENT_LOG_v1 {"time_micros": 1769039427814589, "job": 24, "event": "compaction_finished", "compaction_time_micros": 114432, "compaction_time_cpu_micros": 37327, "output_level": 6, "num_output_files": 1, "total_output_size": 9455515, "num_input_records": 5192, "num_output_records": 4665, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039427815322, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039427818243, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:27.696826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:27.818458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:27.818469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:27.818473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:27.818477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:50:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:27.818480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
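A sanity check of the amplification figures in JOB 24's compaction summary above, computed from the byte counts in its own event-log lines:

    input_l0    = 1247482   # L0 table #49, the freshly flushed memtable
    input_total = 9599215   # "input_data_size" reported at compaction start
    output      = 9455515   # L6 table #50 that the compaction wrote
    write_amp = output / input_l0
    rw_amp    = (input_total + output) / input_l0
    print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}")
    # -> write-amplify 7.6, read-write-amplify 15.3, matching the log line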
Jan 21 23:50:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:28.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 21 23:50:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:28.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:29.270750) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039429270794, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 264, "num_deletes": 251, "total_data_size": 44380, "memory_usage": 50904, "flush_reason": "Manual Compaction"}
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039429273482, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 44390, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21749, "largest_seqno": 22012, "table_properties": {"data_size": 42527, "index_size": 92, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4764, "raw_average_key_size": 18, "raw_value_size": 38985, "raw_average_value_size": 149, "num_data_blocks": 4, "num_entries": 261, "num_filter_entries": 261, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769039429, "oldest_key_time": 1769039429, "file_creation_time": 1769039429, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 2783 microseconds, and 1277 cpu microseconds.
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:29.273532) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 44390 bytes OK
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:29.273591) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:29.275328) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:29.275355) EVENT_LOG_v1 {"time_micros": 1769039429275348, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:29.275378) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 42341, prev total WAL file size 42341, number of live WAL files 2.
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:29.277358) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(43KB)], [50(9233KB)]
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039429277447, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 9499905, "oldest_snapshot_seqno": -1}
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4417 keys, 7476415 bytes, temperature: kUnknown
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039429353251, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 7476415, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7446880, "index_size": 17393, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 112829, "raw_average_key_size": 25, "raw_value_size": 7366588, "raw_average_value_size": 1667, "num_data_blocks": 708, "num_entries": 4417, "num_filter_entries": 4417, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769039429, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:29.353657) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 7476415 bytes
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:29.355800) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.1 rd, 98.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 9.0 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(382.4) write-amplify(168.4) OK, records in: 4926, records dropped: 509 output_compression: NoCompression
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:29.355823) EVENT_LOG_v1 {"time_micros": 1769039429355812, "job": 26, "event": "compaction_finished", "compaction_time_micros": 75925, "compaction_time_cpu_micros": 25717, "output_level": 6, "num_output_files": 1, "total_output_size": 7476415, "num_input_records": 4926, "num_output_records": 4417, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039429355986, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039429357694, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:29.277163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:29.357733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:29.357739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:29.357741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:29.357743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:50:29 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:50:29.357745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:50:29 compute-0 ceph-mon[74318]: pgmap v945: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 21 23:50:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:30.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 21 23:50:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:30.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:31 compute-0 ceph-mon[74318]: pgmap v946: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 21 23:50:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:32.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Jan 21 23:50:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:50:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:32.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:32 compute-0 podman[253386]: 2026-01-21 23:50:32.975080618 +0000 UTC m=+0.086949132 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 21 23:50:33 compute-0 ceph-mon[74318]: pgmap v947: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Jan 21 23:50:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:34.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:34.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:35 compute-0 ceph-mon[74318]: pgmap v948: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:36.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:36 compute-0 ceph-mon[74318]: pgmap v949: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:36.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:50:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:38.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:38.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:50:39
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', '.mgr', '.rgw.root', 'backups']
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:50:39 compute-0 ceph-mon[74318]: pgmap v950: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:50:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:50:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:40.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:40 compute-0 sudo[253410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:50:40 compute-0 sudo[253410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:40 compute-0 sudo[253410]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:40 compute-0 sudo[253435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:50:40 compute-0 sudo[253435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:40 compute-0 sudo[253435]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:40 compute-0 sudo[253460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:50:40 compute-0 sudo[253460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:40 compute-0 sudo[253460]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:40 compute-0 sudo[253485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 21 23:50:40 compute-0 sudo[253485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:40.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:41 compute-0 podman[253580]: 2026-01-21 23:50:41.037852015 +0000 UTC m=+0.078620812 container exec 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:50:41 compute-0 podman[253580]: 2026-01-21 23:50:41.1830817 +0000 UTC m=+0.223850467 container exec_died 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 23:50:41 compute-0 ceph-mon[74318]: pgmap v951: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:50:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:50:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:50:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:50:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:50:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:42.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:50:42 compute-0 podman[253740]: 2026-01-21 23:50:42.151124364 +0000 UTC m=+0.095222941 container exec fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 21 23:50:42 compute-0 podman[253740]: 2026-01-21 23:50:42.177093897 +0000 UTC m=+0.121192484 container exec_died fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 21 23:50:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:42 compute-0 podman[253803]: 2026-01-21 23:50:42.425371487 +0000 UTC m=+0.054845507 container exec 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, vendor=Red Hat, Inc., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, distribution-scope=public, vcs-type=git, architecture=x86_64, description=keepalived for Ceph, io.buildah.version=1.28.2, io.openshift.expose-services=, release=1793)
Jan 21 23:50:42 compute-0 podman[253803]: 2026-01-21 23:50:42.439937473 +0000 UTC m=+0.069411493 container exec_died 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, version=2.2.4, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, name=keepalived)
Jan 21 23:50:42 compute-0 sudo[253485]: pam_unix(sudo:session): session closed for user root
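The repeating sudo triple (/bin/true, /bin/which python3, /bin/true) followed by the hashed cephadm binary is the cephadm mgr module's SSH executor probing the host and then running `cephadm ls`, which inventories every daemon on the node; the podman exec events against the mon, haproxy, and keepalived containers around it belong to the same refresh cycle. A sketch of consuming that inventory; the daemon field names follow the upstream cephadm JSON and are assumptions beyond what the log itself shows:

    import json, subprocess

    # `cephadm ls` (run as root, like the sudo invocation above) prints a JSON
    # array describing the daemons on this host: mon, mgr, osd, rgw, haproxy...
    daemons = json.loads(subprocess.run(
        ["cephadm", "ls"], check=True, capture_output=True, text=True).stdout)
    for d in daemons:
        print(d.get("name"), d.get("state"), d.get("container_image_name"))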
Jan 21 23:50:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:50:42 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:50:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:50:42 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:50:42 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:50:42 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:50:42 compute-0 ceph-mon[74318]: pgmap v952: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:42 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:50:42 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:50:42 compute-0 sudo[253837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:50:42 compute-0 sudo[253837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:42 compute-0 sudo[253837]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:50:42 compute-0 sudo[253862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:50:42 compute-0 sudo[253862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:42 compute-0 sudo[253862]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:42 compute-0 sudo[253887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:50:42 compute-0 sudo[253887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:42 compute-0 sudo[253887]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:42.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:42 compute-0 sudo[253912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:50:42 compute-0 sudo[253912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:43 compute-0 sudo[253912]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:50:43 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:50:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:50:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:50:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:50:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:50:43 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 07749e80-b243-4d23-81ec-9b45695cb2c0 does not exist
Jan 21 23:50:43 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 74e2c002-bdd4-430d-bf98-d72779f32990 does not exist
Jan 21 23:50:43 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev f4fc54bd-c682-43ff-aab5-5647ffcbc324 does not exist
Jan 21 23:50:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:50:43 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:50:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:50:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:50:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:50:43 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:50:43 compute-0 sudo[253969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:50:43 compute-0 sudo[253969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:43 compute-0 sudo[253969]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:43 compute-0 sudo[253994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:50:43 compute-0 sudo[253994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:43 compute-0 sudo[253994]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:43 compute-0 sudo[254019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:50:43 compute-0 sudo[254019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:43 compute-0 sudo[254019]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:43 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:50:43 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:50:43 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:50:43 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:50:43 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:50:43 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
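This audit burst is the cephadm module's OSD-creation preamble: regenerate a minimal ceph.conf for the container, fetch the client.admin and client.bootstrap-osd keys, and list destroyed OSDs whose ids could be recycled before creating anything new. The same destroyed-OSD query from the CLI, as a sketch (admin `ceph` CLI assumed):

    import json, subprocess

    # Mirror of the audited {"prefix": "osd tree", "states": ["destroyed"]} call.
    tree = json.loads(subprocess.run(
        ["ceph", "osd", "tree", "destroyed", "--format", "json"],
        check=True, capture_output=True, text=True).stdout)
    # An empty list here means no destroyed OSD ids are available for reuse.
    print([n["id"] for n in tree.get("nodes", []) if n.get("type") == "osd"])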
Jan 21 23:50:43 compute-0 sudo[254044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:50:43 compute-0 sudo[254044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:50:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:44.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:50:44 compute-0 podman[254110]: 2026-01-21 23:50:44.148132711 +0000 UTC m=+0.068449963 container create 53e8c4f62052af27ecda79af1a5880acb82c8e4e2bb844f85f19a4d134613e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_johnson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 21 23:50:44 compute-0 systemd[1]: Started libpod-conmon-53e8c4f62052af27ecda79af1a5880acb82c8e4e2bb844f85f19a4d134613e11.scope.
Jan 21 23:50:44 compute-0 podman[254110]: 2026-01-21 23:50:44.122285322 +0000 UTC m=+0.042602654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:50:44 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:50:44 compute-0 podman[254110]: 2026-01-21 23:50:44.247855732 +0000 UTC m=+0.168173054 container init 53e8c4f62052af27ecda79af1a5880acb82c8e4e2bb844f85f19a4d134613e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:50:44 compute-0 podman[254110]: 2026-01-21 23:50:44.261112237 +0000 UTC m=+0.181429519 container start 53e8c4f62052af27ecda79af1a5880acb82c8e4e2bb844f85f19a4d134613e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_johnson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:50:44 compute-0 podman[254110]: 2026-01-21 23:50:44.265976239 +0000 UTC m=+0.186293561 container attach 53e8c4f62052af27ecda79af1a5880acb82c8e4e2bb844f85f19a4d134613e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_johnson, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:50:44 compute-0 sharp_johnson[254126]: 167 167
Jan 21 23:50:44 compute-0 systemd[1]: libpod-53e8c4f62052af27ecda79af1a5880acb82c8e4e2bb844f85f19a4d134613e11.scope: Deactivated successfully.
Jan 21 23:50:44 compute-0 podman[254110]: 2026-01-21 23:50:44.269598142 +0000 UTC m=+0.189915444 container died 53e8c4f62052af27ecda79af1a5880acb82c8e4e2bb844f85f19a4d134613e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_johnson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:50:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-94d9ea540b0936637c53425b34b929642c2b904da8bc9e6919bae38d449bd675-merged.mount: Deactivated successfully.
Jan 21 23:50:44 compute-0 podman[254110]: 2026-01-21 23:50:44.320385202 +0000 UTC m=+0.240702464 container remove 53e8c4f62052af27ecda79af1a5880acb82c8e4e2bb844f85f19a4d134613e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:50:44 compute-0 systemd[1]: libpod-conmon-53e8c4f62052af27ecda79af1a5880acb82c8e4e2bb844f85f19a4d134613e11.scope: Deactivated successfully.
Jan 21 23:50:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:44 compute-0 podman[254151]: 2026-01-21 23:50:44.559103513 +0000 UTC m=+0.063865290 container create 981c2cafcc13fcffea61182b45ab3892bed33ee2e02765eb1742dc4aebf8d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_edison, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 21 23:50:44 compute-0 systemd[1]: Started libpod-conmon-981c2cafcc13fcffea61182b45ab3892bed33ee2e02765eb1742dc4aebf8d7af.scope.
Jan 21 23:50:44 compute-0 podman[254151]: 2026-01-21 23:50:44.529209437 +0000 UTC m=+0.033971254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:50:44 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:50:44 compute-0 ceph-mon[74318]: pgmap v953: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d85a5542a069fe77b5e14044d91cfac5b155b45f8451db1d9833203eecd1dcd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d85a5542a069fe77b5e14044d91cfac5b155b45f8451db1d9833203eecd1dcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d85a5542a069fe77b5e14044d91cfac5b155b45f8451db1d9833203eecd1dcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d85a5542a069fe77b5e14044d91cfac5b155b45f8451db1d9833203eecd1dcd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d85a5542a069fe77b5e14044d91cfac5b155b45f8451db1d9833203eecd1dcd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:50:44 compute-0 podman[254151]: 2026-01-21 23:50:44.67498925 +0000 UTC m=+0.179751017 container init 981c2cafcc13fcffea61182b45ab3892bed33ee2e02765eb1742dc4aebf8d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_edison, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:50:44 compute-0 podman[254151]: 2026-01-21 23:50:44.682100222 +0000 UTC m=+0.186861979 container start 981c2cafcc13fcffea61182b45ab3892bed33ee2e02765eb1742dc4aebf8d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_edison, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:50:44 compute-0 podman[254151]: 2026-01-21 23:50:44.687994836 +0000 UTC m=+0.192756763 container attach 981c2cafcc13fcffea61182b45ab3892bed33ee2e02765eb1742dc4aebf8d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:50:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:44.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:45 compute-0 unruffled_edison[254167]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:50:45 compute-0 unruffled_edison[254167]: --> relative data size: 1.0
Jan 21 23:50:45 compute-0 unruffled_edison[254167]: --> All data devices are unavailable
Jan 21 23:50:45 compute-0 systemd[1]: libpod-981c2cafcc13fcffea61182b45ab3892bed33ee2e02765eb1742dc4aebf8d7af.scope: Deactivated successfully.
Jan 21 23:50:45 compute-0 podman[254151]: 2026-01-21 23:50:45.446098721 +0000 UTC m=+0.950860488 container died 981c2cafcc13fcffea61182b45ab3892bed33ee2e02765eb1742dc4aebf8d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:50:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d85a5542a069fe77b5e14044d91cfac5b155b45f8451db1d9833203eecd1dcd-merged.mount: Deactivated successfully.
Jan 21 23:50:45 compute-0 podman[254151]: 2026-01-21 23:50:45.507947417 +0000 UTC m=+1.012709184 container remove 981c2cafcc13fcffea61182b45ab3892bed33ee2e02765eb1742dc4aebf8d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_edison, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 23:50:45 compute-0 systemd[1]: libpod-conmon-981c2cafcc13fcffea61182b45ab3892bed33ee2e02765eb1742dc4aebf8d7af.scope: Deactivated successfully.
Jan 21 23:50:45 compute-0 sudo[254044]: pam_unix(sudo:session): session closed for user root
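The `lvm batch --no-auto /dev/ceph_vg0/ceph_lv0` run inside the throwaway container ended with "--> All data devices are unavailable": the LV already carries OSD 1 (see the `lvm list` output below), so ceph-volume had nothing to create and cephadm moved on. A sketch of asking ceph-volume why devices are rejected; it assumes ceph-volume is runnable on the host directly, whereas cephadm (as logged) wraps it in a container:

    import json, subprocess

    # `ceph-volume inventory` reports, per device, whether it can host a new
    # OSD and why not ("rejected_reasons"), e.g. an LV that already holds one.
    inv = json.loads(subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True).stdout)
    for dev in inv:
        if not dev.get("available"):
            print(dev.get("path"), dev.get("rejected_reasons"))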
Jan 21 23:50:45 compute-0 sudo[254195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:50:45 compute-0 sudo[254195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:45 compute-0 sudo[254195]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:45 compute-0 sudo[254220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:50:45 compute-0 sudo[254220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:45 compute-0 sudo[254220]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:45 compute-0 sudo[254245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:50:45 compute-0 sudo[254245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:45 compute-0 sudo[254245]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:45 compute-0 sudo[254270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:50:45 compute-0 sudo[254270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:45 compute-0 sudo[254295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:50:45 compute-0 sudo[254295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:45 compute-0 sudo[254295]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:46 compute-0 sudo[254320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:50:46 compute-0 sudo[254320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:46 compute-0 sudo[254320]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:50:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:46.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:50:46 compute-0 podman[254383]: 2026-01-21 23:50:46.221704114 +0000 UTC m=+0.049581632 container create 508f5422634e0d6cb214c6a9ca48d6e66ead6709a7f3c6cc7111f30c9d241412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 21 23:50:46 compute-0 systemd[1]: Started libpod-conmon-508f5422634e0d6cb214c6a9ca48d6e66ead6709a7f3c6cc7111f30c9d241412.scope.
Jan 21 23:50:46 compute-0 podman[254383]: 2026-01-21 23:50:46.195467023 +0000 UTC m=+0.023344581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:50:46 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:50:46 compute-0 podman[254383]: 2026-01-21 23:50:46.319605208 +0000 UTC m=+0.147482726 container init 508f5422634e0d6cb214c6a9ca48d6e66ead6709a7f3c6cc7111f30c9d241412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 21 23:50:46 compute-0 podman[254383]: 2026-01-21 23:50:46.330378405 +0000 UTC m=+0.158255913 container start 508f5422634e0d6cb214c6a9ca48d6e66ead6709a7f3c6cc7111f30c9d241412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 21 23:50:46 compute-0 podman[254383]: 2026-01-21 23:50:46.334286568 +0000 UTC m=+0.162164076 container attach 508f5422634e0d6cb214c6a9ca48d6e66ead6709a7f3c6cc7111f30c9d241412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 21 23:50:46 compute-0 distracted_brattain[254399]: 167 167
Jan 21 23:50:46 compute-0 systemd[1]: libpod-508f5422634e0d6cb214c6a9ca48d6e66ead6709a7f3c6cc7111f30c9d241412.scope: Deactivated successfully.
Jan 21 23:50:46 compute-0 podman[254383]: 2026-01-21 23:50:46.336824997 +0000 UTC m=+0.164702515 container died 508f5422634e0d6cb214c6a9ca48d6e66ead6709a7f3c6cc7111f30c9d241412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 21 23:50:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7039e5b2ec010ab3e32601f83054314ede191449578d95ca11a9d5e8fb169100-merged.mount: Deactivated successfully.
Jan 21 23:50:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:46 compute-0 podman[254383]: 2026-01-21 23:50:46.395194583 +0000 UTC m=+0.223072101 container remove 508f5422634e0d6cb214c6a9ca48d6e66ead6709a7f3c6cc7111f30c9d241412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brattain, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:50:46 compute-0 systemd[1]: libpod-conmon-508f5422634e0d6cb214c6a9ca48d6e66ead6709a7f3c6cc7111f30c9d241412.scope: Deactivated successfully.
Jan 21 23:50:46 compute-0 podman[254422]: 2026-01-21 23:50:46.645927121 +0000 UTC m=+0.076625030 container create 5e22f93df0c523677c9952478f54000d65a1718f7377f364bdc65db133c510e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:50:46 compute-0 systemd[1]: Started libpod-conmon-5e22f93df0c523677c9952478f54000d65a1718f7377f364bdc65db133c510e7.scope.
Jan 21 23:50:46 compute-0 podman[254422]: 2026-01-21 23:50:46.618518372 +0000 UTC m=+0.049216331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:50:46 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f300da8367485eff75d569700b05a1704d3404a3e54c615b03fa335033386597/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f300da8367485eff75d569700b05a1704d3404a3e54c615b03fa335033386597/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f300da8367485eff75d569700b05a1704d3404a3e54c615b03fa335033386597/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f300da8367485eff75d569700b05a1704d3404a3e54c615b03fa335033386597/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:50:46 compute-0 podman[254422]: 2026-01-21 23:50:46.746489738 +0000 UTC m=+0.177187707 container init 5e22f93df0c523677c9952478f54000d65a1718f7377f364bdc65db133c510e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mcclintock, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 23:50:46 compute-0 podman[254422]: 2026-01-21 23:50:46.757270825 +0000 UTC m=+0.187968744 container start 5e22f93df0c523677c9952478f54000d65a1718f7377f364bdc65db133c510e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mcclintock, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 23:50:46 compute-0 podman[254422]: 2026-01-21 23:50:46.762133707 +0000 UTC m=+0.192831806 container attach 5e22f93df0c523677c9952478f54000d65a1718f7377f364bdc65db133c510e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:50:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:46.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:47 compute-0 ceph-mon[74318]: pgmap v954: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]: {
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:     "1": [
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:         {
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:             "devices": [
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:                 "/dev/loop3"
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:             ],
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:             "lv_name": "ceph_lv0",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:             "lv_size": "7511998464",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:             "name": "ceph_lv0",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:             "tags": {
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:                 "ceph.cluster_name": "ceph",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:                 "ceph.crush_device_class": "",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:                 "ceph.encrypted": "0",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:                 "ceph.osd_id": "1",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:                 "ceph.type": "block",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:                 "ceph.vdo": "0"
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:             },
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:             "type": "block",
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:             "vg_name": "ceph_vg0"
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:         }
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]:     ]
Jan 21 23:50:47 compute-0 fervent_mcclintock[254439]: }
Jan 21 23:50:47 compute-0 systemd[1]: libpod-5e22f93df0c523677c9952478f54000d65a1718f7377f364bdc65db133c510e7.scope: Deactivated successfully.
Jan 21 23:50:47 compute-0 podman[254422]: 2026-01-21 23:50:47.514083209 +0000 UTC m=+0.944781088 container died 5e22f93df0c523677c9952478f54000d65a1718f7377f364bdc65db133c510e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mcclintock, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:50:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f300da8367485eff75d569700b05a1704d3404a3e54c615b03fa335033386597-merged.mount: Deactivated successfully.
Jan 21 23:50:47 compute-0 podman[254422]: 2026-01-21 23:50:47.578881967 +0000 UTC m=+1.009579876 container remove 5e22f93df0c523677c9952478f54000d65a1718f7377f364bdc65db133c510e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mcclintock, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 21 23:50:47 compute-0 systemd[1]: libpod-conmon-5e22f93df0c523677c9952478f54000d65a1718f7377f364bdc65db133c510e7.scope: Deactivated successfully.
Jan 21 23:50:47 compute-0 sudo[254270]: pam_unix(sudo:session): session closed for user root
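The JSON block emitted by the fervent_mcclintock container is the output of `ceph-volume lvm list --format json`: a map from OSD id to the logical volumes backing it, with the authoritative metadata carried as LVM tags (cluster fsid, osd_fsid, encryption flag, osdspec affinity). A sketch reducing it to an osd-to-device view, using the exact keys logged above:

    import json, subprocess

    # Same query as the logged cephadm invocation: lvm list as JSON.
    lvm = json.loads(subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout)
    for osd_id, lvs in lvm.items():
        for lv in lvs:  # here: osd "1" on /dev/ceph_vg0/ceph_lv0 over /dev/loop3
            print(osd_id, lv["lv_path"], ",".join(lv["devices"]))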
Jan 21 23:50:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:50:47 compute-0 sudo[254460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:50:47 compute-0 sudo[254460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:47 compute-0 sudo[254460]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:47 compute-0 sudo[254485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:50:47 compute-0 sudo[254485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:47 compute-0 sudo[254485]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:47 compute-0 sudo[254510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:50:47 compute-0 sudo[254510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:47 compute-0 sudo[254510]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:47 compute-0 sudo[254535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:50:47 compute-0 sudo[254535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:48.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:48 compute-0 podman[254600]: 2026-01-21 23:50:48.359550048 +0000 UTC m=+0.050739198 container create 00b54916e8fd81808ffecb47b752b68d91af05f0e3e6e43781008dd27db45158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goodall, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 23:50:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:48 compute-0 systemd[1]: Started libpod-conmon-00b54916e8fd81808ffecb47b752b68d91af05f0e3e6e43781008dd27db45158.scope.
Jan 21 23:50:48 compute-0 podman[254600]: 2026-01-21 23:50:48.334189584 +0000 UTC m=+0.025378794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:50:48 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:50:48 compute-0 podman[254600]: 2026-01-21 23:50:48.461201549 +0000 UTC m=+0.152390709 container init 00b54916e8fd81808ffecb47b752b68d91af05f0e3e6e43781008dd27db45158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goodall, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 21 23:50:48 compute-0 podman[254600]: 2026-01-21 23:50:48.473279098 +0000 UTC m=+0.164468278 container start 00b54916e8fd81808ffecb47b752b68d91af05f0e3e6e43781008dd27db45158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 21 23:50:48 compute-0 podman[254600]: 2026-01-21 23:50:48.477939263 +0000 UTC m=+0.169128443 container attach 00b54916e8fd81808ffecb47b752b68d91af05f0e3e6e43781008dd27db45158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goodall, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:50:48 compute-0 fervent_goodall[254616]: 167 167
Jan 21 23:50:48 compute-0 systemd[1]: libpod-00b54916e8fd81808ffecb47b752b68d91af05f0e3e6e43781008dd27db45158.scope: Deactivated successfully.
Jan 21 23:50:48 compute-0 podman[254600]: 2026-01-21 23:50:48.479287796 +0000 UTC m=+0.170476986 container died 00b54916e8fd81808ffecb47b752b68d91af05f0e3e6e43781008dd27db45158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:50:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c89550f78dddae642db9a7f086515732754e92f8636bf08f0d32e84cb744a326-merged.mount: Deactivated successfully.
Jan 21 23:50:48 compute-0 podman[254600]: 2026-01-21 23:50:48.531079616 +0000 UTC m=+0.222268796 container remove 00b54916e8fd81808ffecb47b752b68d91af05f0e3e6e43781008dd27db45158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 21 23:50:48 compute-0 systemd[1]: libpod-conmon-00b54916e8fd81808ffecb47b752b68d91af05f0e3e6e43781008dd27db45158.scope: Deactivated successfully.
Jan 21 23:50:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:50:48.746 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:50:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:50:48.749 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:50:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:50:48.749 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
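[annotation] The three DEBUG lines above are the standard oslo.concurrency lock trace: the "inner" wrapper logs once while acquiring, once on acquisition (with the wait time), and once on release (with the hold time). A minimal sketch of code that produces this style of trio, using only the public lockutils.synchronized decorator; the lock name matches the log, the function body is a hypothetical stand-in:

    import logging
    from oslo_concurrency import lockutils

    # lockutils logs its acquire/release trace at DEBUG level.
    logging.basicConfig(level=logging.DEBUG)

    # Wrapping a function this way makes oslo.concurrency emit the
    # "Acquiring lock" / "acquired" / "released" DEBUG trio seen above.
    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        pass  # real code would poll the monitored child processes here

    _check_child_processes()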
Jan 21 23:50:48 compute-0 podman[254640]: 2026-01-21 23:50:48.781005428 +0000 UTC m=+0.072606963 container create 51f4a43451b937a7ea52c00f83ad3e217861ae5c1e9da3ad9ebd7de5d9418a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:50:48 compute-0 systemd[1]: Started libpod-conmon-51f4a43451b937a7ea52c00f83ad3e217861ae5c1e9da3ad9ebd7de5d9418a78.scope.
Jan 21 23:50:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:48.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:48 compute-0 podman[254640]: 2026-01-21 23:50:48.753369703 +0000 UTC m=+0.044971298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:50:48 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ae5f51cf8acfead77a1b391ed640fb4742fa2a1fc4f1c1153cc46e15f29470/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ae5f51cf8acfead77a1b391ed640fb4742fa2a1fc4f1c1153cc46e15f29470/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ae5f51cf8acfead77a1b391ed640fb4742fa2a1fc4f1c1153cc46e15f29470/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ae5f51cf8acfead77a1b391ed640fb4742fa2a1fc4f1c1153cc46e15f29470/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:50:48 compute-0 podman[254640]: 2026-01-21 23:50:48.891061452 +0000 UTC m=+0.182663047 container init 51f4a43451b937a7ea52c00f83ad3e217861ae5c1e9da3ad9ebd7de5d9418a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:50:48 compute-0 podman[254640]: 2026-01-21 23:50:48.907033932 +0000 UTC m=+0.198635477 container start 51f4a43451b937a7ea52c00f83ad3e217861ae5c1e9da3ad9ebd7de5d9418a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lederberg, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 21 23:50:48 compute-0 podman[254640]: 2026-01-21 23:50:48.910809681 +0000 UTC m=+0.202411226 container attach 51f4a43451b937a7ea52c00f83ad3e217861ae5c1e9da3ad9ebd7de5d9418a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 23:50:49 compute-0 ceph-mon[74318]: pgmap v955: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:49 compute-0 bold_lederberg[254656]: {
Jan 21 23:50:49 compute-0 bold_lederberg[254656]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:50:49 compute-0 bold_lederberg[254656]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:50:49 compute-0 bold_lederberg[254656]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:50:49 compute-0 bold_lederberg[254656]:         "osd_id": 1,
Jan 21 23:50:49 compute-0 bold_lederberg[254656]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:50:49 compute-0 bold_lederberg[254656]:         "type": "bluestore"
Jan 21 23:50:49 compute-0 bold_lederberg[254656]:     }
Jan 21 23:50:49 compute-0 bold_lederberg[254656]: }
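[annotation] The JSON emitted by the bold_lederberg container is the answer to the ceph-volume "raw list --format json" call that cephadm issued via sudo at 23:50:47: one entry per OSD device, keyed by osd_uuid. A minimal sketch of consuming it, using only the fields present in the log output above:

    import json

    # Structure copied verbatim from the container output above.
    raw = '''{
        "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
            "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 1,
            "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
            "type": "bluestore"
        }
    }'''

    for osd_uuid, dev in json.loads(raw).items():
        print(f"osd.{dev['osd_id']} ({dev['type']}) on {dev['device']}")
    # -> osd.1 (bluestore) on /dev/mapper/ceph_vg0-ceph_lv0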
Jan 21 23:50:49 compute-0 systemd[1]: libpod-51f4a43451b937a7ea52c00f83ad3e217861ae5c1e9da3ad9ebd7de5d9418a78.scope: Deactivated successfully.
Jan 21 23:50:49 compute-0 podman[254640]: 2026-01-21 23:50:49.886073791 +0000 UTC m=+1.177675336 container died 51f4a43451b937a7ea52c00f83ad3e217861ae5c1e9da3ad9ebd7de5d9418a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lederberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 21 23:50:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-69ae5f51cf8acfead77a1b391ed640fb4742fa2a1fc4f1c1153cc46e15f29470-merged.mount: Deactivated successfully.
Jan 21 23:50:49 compute-0 podman[254640]: 2026-01-21 23:50:49.949241658 +0000 UTC m=+1.240843173 container remove 51f4a43451b937a7ea52c00f83ad3e217861ae5c1e9da3ad9ebd7de5d9418a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lederberg, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 23:50:49 compute-0 systemd[1]: libpod-conmon-51f4a43451b937a7ea52c00f83ad3e217861ae5c1e9da3ad9ebd7de5d9418a78.scope: Deactivated successfully.
Jan 21 23:50:49 compute-0 sudo[254535]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:50:50 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:50:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:50:50 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:50:50 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 20faf5d1-6a07-4467-b578-78b3b1ffed1e does not exist
Jan 21 23:50:50 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 09576c17-b1b2-4c39-ae5c-c6b34e581c5e does not exist
Jan 21 23:50:50 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 7c0e1eb6-2cc0-4a52-907f-153f21d24d02 does not exist
Jan 21 23:50:50 compute-0 sudo[254692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:50:50 compute-0 sudo[254692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:50 compute-0 sudo[254692]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:50.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:50 compute-0 sudo[254717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:50:50 compute-0 sudo[254717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:50:50 compute-0 sudo[254717]: pam_unix(sudo:session): session closed for user root
Jan 21 23:50:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:50.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:51 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:50:51 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:50:51 compute-0 ceph-mon[74318]: pgmap v956: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:50:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:52.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:50:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:50:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:52.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:53 compute-0 ceph-mon[74318]: pgmap v957: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:54.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
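[annotation] The pg_autoscaler figures above are internally consistent: each logged pg target is capacity_ratio x bias x a cluster PG budget of 300, which would match the default mon_target_pg_per_osd=100 with 3 OSDs (the OSD count is an assumption; it is not shown in this excerpt). A worked check against three of the logged pools:

    # capacity_ratio and bias are copied from the log lines above.
    PG_BUDGET = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs

    pools = {
        '.mgr':               (2.0538165363856318e-05, 1.0),
        'images':             (0.0019031427391587568,  1.0),
        'cephfs.cephfs.meta': (1.4540294062907128e-06, 4.0),
    }
    for name, (capacity_ratio, bias) in pools.items():
        print(name, capacity_ratio * bias * PG_BUDGET)
    # .mgr               -> ~0.0061614496  (log: 0.006161449609156895)
    # images             -> ~0.5709428217  (log: 0.570942821747627)
    # cephfs.cephfs.meta -> ~0.0017448353  (log: 0.0017448352875488555)
    # Each target is then quantized (to a power of two in these lines) and,
    # being close to the pool's current pg_num, triggers no resize.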
Jan 21 23:50:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:54.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:55 compute-0 ceph-mon[74318]: pgmap v958: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:56 compute-0 podman[254745]: 2026-01-21 23:50:56.035998864 +0000 UTC m=+0.137257607 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
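[annotation] The container health_status=healthy event above is podman running the healthcheck declared in config_data ('test': '/openstack/healthcheck') on its timer. The same check can be triggered on demand; exit status 0 means the check passed. A minimal sketch:

    import subprocess

    # Runs the container's declared healthcheck once, outside the timer.
    r = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'])
    print('healthy' if r.returncode == 0 else f'unhealthy (rc={r.returncode})')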
Jan 21 23:50:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:56.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:56.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:57 compute-0 ceph-mon[74318]: pgmap v959: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:50:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:50:58.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:50:58 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:50:58.813 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 23:50:58 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:50:58.814 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 23:50:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:50:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:50:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:50:58.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:50:59 compute-0 ceph-mon[74318]: pgmap v960: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:00.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:00.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:01 compute-0 ceph-mon[74318]: pgmap v961: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:51:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:02.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:51:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:02 compute-0 ceph-mon[74318]: pgmap v962: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:51:02 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:51:02.817 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 23:51:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:51:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:02.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:51:03 compute-0 podman[254773]: 2026-01-21 23:51:03.980068596 +0000 UTC m=+0.097360677 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 21 23:51:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:04.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:04.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:05 compute-0 ceph-mon[74318]: pgmap v963: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:05 compute-0 nova_compute[247516]: 2026-01-21 23:51:05.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:51:05 compute-0 nova_compute[247516]: 2026-01-21 23:51:05.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 23:51:05 compute-0 nova_compute[247516]: 2026-01-21 23:51:05.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 23:51:06 compute-0 nova_compute[247516]: 2026-01-21 23:51:06.024 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 23:51:06 compute-0 sudo[254795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:51:06 compute-0 sudo[254795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:06 compute-0 sudo[254795]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:51:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:06.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:51:06 compute-0 sudo[254820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:51:06 compute-0 sudo[254820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:06 compute-0 sudo[254820]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:06.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:06 compute-0 nova_compute[247516]: 2026-01-21 23:51:06.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:51:06 compute-0 nova_compute[247516]: 2026-01-21 23:51:06.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 23:51:07 compute-0 ceph-mon[74318]: pgmap v964: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:51:07 compute-0 nova_compute[247516]: 2026-01-21 23:51:07.988 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:51:08 compute-0 nova_compute[247516]: 2026-01-21 23:51:08.007 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:51:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:08.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:08.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:08 compute-0 nova_compute[247516]: 2026-01-21 23:51:08.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:51:08 compute-0 nova_compute[247516]: 2026-01-21 23:51:08.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:51:08 compute-0 nova_compute[247516]: 2026-01-21 23:51:08.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:51:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:51:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:51:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:51:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:51:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:51:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:51:09 compute-0 ceph-mon[74318]: pgmap v965: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:09 compute-0 nova_compute[247516]: 2026-01-21 23:51:09.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:51:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:10.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 21 23:51:10 compute-0 ceph-mon[74318]: pgmap v966: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 21 23:51:10 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3482869434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:51:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:10.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:10 compute-0 nova_compute[247516]: 2026-01-21 23:51:10.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:51:11 compute-0 nova_compute[247516]: 2026-01-21 23:51:11.038 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:51:11 compute-0 nova_compute[247516]: 2026-01-21 23:51:11.038 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:51:11 compute-0 nova_compute[247516]: 2026-01-21 23:51:11.039 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:51:11 compute-0 nova_compute[247516]: 2026-01-21 23:51:11.039 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 23:51:11 compute-0 nova_compute[247516]: 2026-01-21 23:51:11.040 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:51:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:51:11 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2997659677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:51:11 compute-0 nova_compute[247516]: 2026-01-21 23:51:11.536 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
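[annotation] Each update_available_resource pass shells out to "ceph df", as logged above, and the mon audit lines that bracket it are the same command arriving at the monitor. A minimal standalone reproduction, assuming the standard ceph df --format=json schema (a top-level "stats" object with total/available byte counts) and host access to the openstack keyring referenced by /etc/ceph/ceph.conf:

    import json
    import subprocess

    # Same command line nova logs above.
    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])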
Jan 21 23:51:11 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/121757832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:51:11 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2997659677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:51:11 compute-0 nova_compute[247516]: 2026-01-21 23:51:11.807 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 23:51:11 compute-0 nova_compute[247516]: 2026-01-21 23:51:11.810 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5201MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 23:51:11 compute-0 nova_compute[247516]: 2026-01-21 23:51:11.810 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:51:11 compute-0 nova_compute[247516]: 2026-01-21 23:51:11.811 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:51:12 compute-0 nova_compute[247516]: 2026-01-21 23:51:12.064 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 23:51:12 compute-0 nova_compute[247516]: 2026-01-21 23:51:12.064 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 23:51:12 compute-0 nova_compute[247516]: 2026-01-21 23:51:12.112 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Refreshing inventories for resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 21 23:51:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:12.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:12 compute-0 nova_compute[247516]: 2026-01-21 23:51:12.378 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Updating ProviderTree inventory for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 21 23:51:12 compute-0 nova_compute[247516]: 2026-01-21 23:51:12.378 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Updating inventory in ProviderTree for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
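[annotation] The inventory dict logged above is what placement uses to size this node: for each resource class, usable capacity is (total - reserved) x allocation_ratio. Applied to the logged values:

    # Inventory copied from the ProviderTree update above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, capacity)
    # VCPU -> 32 schedulable vCPUs, MEMORY_MB -> 7167 MB, DISK_GB -> 18 GB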
Jan 21 23:51:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 21 23:51:12 compute-0 nova_compute[247516]: 2026-01-21 23:51:12.397 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Refreshing aggregate associations for resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 21 23:51:12 compute-0 nova_compute[247516]: 2026-01-21 23:51:12.622 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Refreshing trait associations for resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 21 23:51:12 compute-0 nova_compute[247516]: 2026-01-21 23:51:12.646 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:51:12 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/4000919985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:51:12 compute-0 ceph-mon[74318]: pgmap v967: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 21 23:51:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:51:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:12.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
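
The anonymous "HEAD / HTTP/1.0" requests that beast logs every ~2 seconds from 192.168.122.100 and .102 look like load-balancer health probes against radosgw. A sketch of an equivalent probe, under stated assumptions: the listen address and port 8080 are guesses (the log does not show radosgw's endpoint), and http.client speaks HTTP/1.1 rather than the probe's HTTP/1.0:

    import http.client

    # Hypothetical probe mirroring the logged check: HEAD / and expect 200.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)  # port assumed
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # 200, matching http_status=200 in the beast lines
    conn.close()
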
Jan 21 23:51:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:51:13 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1416784345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:51:13 compute-0 nova_compute[247516]: 2026-01-21 23:51:13.144 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
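
The subprocess pair logged here is nova gathering Ceph pool capacity by shelling out: oslo.concurrency's processutils runs ceph df, and the CMD line reports rc=0 after 0.497s. A sketch that reruns the same command and reads the cluster totals; the "stats"/"total_avail_bytes" field names follow the ceph df JSON schema and may differ across releases:

    import json
    import subprocess

    # Same command as logged; assumes the client.openstack keyring is readable.
    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
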
Jan 21 23:51:13 compute-0 nova_compute[247516]: 2026-01-21 23:51:13.152 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 23:51:13 compute-0 nova_compute[247516]: 2026-01-21 23:51:13.206 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 23:51:13 compute-0 nova_compute[247516]: 2026-01-21 23:51:13.208 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 23:51:13 compute-0 nova_compute[247516]: 2026-01-21 23:51:13.209 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:51:13 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3467879083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:51:13 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1416784345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:51:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:14.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 21 23:51:14 compute-0 ceph-mon[74318]: pgmap v968: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 21 23:51:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:14.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:16.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:16 compute-0 nova_compute[247516]: 2026-01-21 23:51:16.218 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:51:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 21 23:51:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:16.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:17 compute-0 ceph-mon[74318]: pgmap v969: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 21 23:51:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:51:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:18.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 21 23:51:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:18.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:19 compute-0 ceph-mon[74318]: pgmap v970: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 21 23:51:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:51:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:20.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:51:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 80 MiB data, 213 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 MiB/s wr, 23 op/s
Jan 21 23:51:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:51:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:20.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:51:21 compute-0 ceph-mon[74318]: pgmap v971: 305 pgs: 305 active+clean; 80 MiB data, 213 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 MiB/s wr, 23 op/s
Jan 21 23:51:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:22.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 21 23:51:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:51:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:22.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:23 compute-0 ceph-mon[74318]: pgmap v972: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 21 23:51:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:24.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 21 23:51:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:24.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:25 compute-0 ceph-mon[74318]: pgmap v973: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 21 23:51:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:51:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:26.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:51:26 compute-0 sudo[254900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:51:26 compute-0 sudo[254900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:26 compute-0 sudo[254900]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:26 compute-0 sudo[254926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:51:26 compute-0 sudo[254926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:26 compute-0 sudo[254926]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 21 23:51:26 compute-0 podman[254924]: 2026-01-21 23:51:26.436326093 +0000 UTC m=+0.134962427 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Jan 21 23:51:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2711297109' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:51:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2711297109' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:51:26 compute-0 ceph-mon[74318]: pgmap v974: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 21 23:51:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:51:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:26.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:51:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:51:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/659472456' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:51:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/659472456' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:51:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:51:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:28.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:51:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 21 23:51:28 compute-0 ceph-mon[74318]: pgmap v975: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 21 23:51:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:51:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:28.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:51:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:30.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 49 MiB data, 197 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 21 23:51:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:30.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:31 compute-0 ceph-mon[74318]: pgmap v976: 305 pgs: 305 active+clean; 49 MiB data, 197 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 21 23:51:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:32.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 252 KiB/s wr, 33 op/s
Jan 21 23:51:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:51:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:32.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:33 compute-0 ceph-mon[74318]: pgmap v977: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 252 KiB/s wr, 33 op/s
Jan 21 23:51:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.002000062s ======
Jan 21 23:51:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:34.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000062s
Jan 21 23:51:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 21 23:51:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:51:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:34.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:51:34 compute-0 podman[254980]: 2026-01-21 23:51:34.974878391 +0000 UTC m=+0.083265792 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:51:35 compute-0 ceph-mon[74318]: pgmap v978: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 21 23:51:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:36.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 21 23:51:36 compute-0 ceph-mon[74318]: pgmap v979: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 21 23:51:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:36.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:51:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:51:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:38.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:51:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 21 23:51:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:51:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:38.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:51:39
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['backups', '.mgr', 'volumes', 'cephfs.cephfs.data', 'images', '.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log']
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:51:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:51:39 compute-0 ceph-mon[74318]: pgmap v980: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 21 23:51:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:40.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 15 op/s
Jan 21 23:51:40 compute-0 ceph-mon[74318]: pgmap v981: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 15 op/s
Jan 21 23:51:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:40.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:51:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:42.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:51:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 341 B/s wr, 13 op/s
Jan 21 23:51:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:51:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:51:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:42.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:51:43 compute-0 ceph-mon[74318]: pgmap v982: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 341 B/s wr, 13 op/s
Jan 21 23:51:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:51:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:44.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:51:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:51:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:44.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:51:45 compute-0 ceph-mon[74318]: pgmap v983: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:46.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:46 compute-0 sudo[255005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:51:46 compute-0 sudo[255005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:46 compute-0 sudo[255005]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:46 compute-0 sudo[255030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:51:46 compute-0 sudo[255030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:46 compute-0 sudo[255030]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:51:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:46.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:51:47 compute-0 ceph-mon[74318]: pgmap v984: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:51:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:51:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:48.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:51:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:51:48.747 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:51:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:51:48.748 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:51:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:51:48.748 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:51:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:48.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:49 compute-0 ceph-mon[74318]: pgmap v985: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:50.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:50 compute-0 ceph-mon[74318]: pgmap v986: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:50 compute-0 sudo[255057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:51:50 compute-0 sudo[255057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:50 compute-0 sudo[255057]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:50 compute-0 sudo[255082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:51:50 compute-0 sudo[255082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:50 compute-0 sudo[255082]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:50 compute-0 sudo[255107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:51:50 compute-0 sudo[255107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:50 compute-0 sudo[255107]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:50 compute-0 sudo[255132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:51:50 compute-0 sudo[255132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:50.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:51 compute-0 sudo[255132]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:51:51 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:51:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:51:51 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:51:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:51:51 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:51:51 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev bc5521dc-035e-4bce-9a2c-27bfb33b31c1 does not exist
Jan 21 23:51:51 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 1de61e30-a115-472c-83e1-9146107c5f95 does not exist
Jan 21 23:51:51 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 3dabfa75-bee8-4de9-a7e3-82aa581cad10 does not exist
Jan 21 23:51:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:51:51 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:51:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:51:51 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:51:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:51:51 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:51:51 compute-0 sudo[255190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:51:51 compute-0 sudo[255190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:51 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:51:51 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:51:51 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:51:51 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:51:51 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:51:51 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:51:51 compute-0 sudo[255190]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:51 compute-0 sudo[255215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:51:51 compute-0 sudo[255215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:51 compute-0 sudo[255215]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:51 compute-0 sudo[255240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:51:51 compute-0 sudo[255240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:51 compute-0 sudo[255240]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:51 compute-0 sudo[255265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:51:51 compute-0 sudo[255265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
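
cephadm runs ceph-volume inside a one-shot container, and sudo records the full command line above. A sketch that would replay it verbatim (requires root; the --config-json payload arrives on stdin and is not captured in the log, so the empty JSON below is a placeholder assumption):

    import subprocess

    cmd = [
        "/bin/python3",
        "/var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
        "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
        "--image", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
        "--timeout", "895",
        "ceph-volume", "--fsid", "3759241a-7f1c-520d-ba17-879943ee2f00",
        "--config-json", "-", "--",
        "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0",
        "--yes", "--no-systemd",
    ]
    subprocess.run(cmd, input=b"{}", check=True)  # placeholder config JSON (assumption)

The container output that follows ("passed data devices: 0 physical, 1 LVM ... All data devices are unavailable") is ceph-volume rejecting the LVM device, which typically means it is already prepared or otherwise unusable, so the batch makes no changes.
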
Jan 21 23:51:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:51:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:52.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:51:52 compute-0 podman[255330]: 2026-01-21 23:51:52.292742566 +0000 UTC m=+0.067143544 container create 30098f8c286a4b0da6f0224cd811623d2339ed75310ca92e68480aedc168c038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:51:52 compute-0 podman[255330]: 2026-01-21 23:51:52.262729059 +0000 UTC m=+0.037130097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:51:52 compute-0 systemd[1]: Started libpod-conmon-30098f8c286a4b0da6f0224cd811623d2339ed75310ca92e68480aedc168c038.scope.
Jan 21 23:51:52 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:51:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:52 compute-0 podman[255330]: 2026-01-21 23:51:52.422857032 +0000 UTC m=+0.197258070 container init 30098f8c286a4b0da6f0224cd811623d2339ed75310ca92e68480aedc168c038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 21 23:51:52 compute-0 podman[255330]: 2026-01-21 23:51:52.431727896 +0000 UTC m=+0.206128864 container start 30098f8c286a4b0da6f0224cd811623d2339ed75310ca92e68480aedc168c038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:51:52 compute-0 podman[255330]: 2026-01-21 23:51:52.435457702 +0000 UTC m=+0.209858730 container attach 30098f8c286a4b0da6f0224cd811623d2339ed75310ca92e68480aedc168c038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tu, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:51:52 compute-0 determined_tu[255346]: 167 167
Jan 21 23:51:52 compute-0 systemd[1]: libpod-30098f8c286a4b0da6f0224cd811623d2339ed75310ca92e68480aedc168c038.scope: Deactivated successfully.
Jan 21 23:51:52 compute-0 conmon[255346]: conmon 30098f8c286a4b0da6f0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-30098f8c286a4b0da6f0224cd811623d2339ed75310ca92e68480aedc168c038.scope/container/memory.events
Jan 21 23:51:52 compute-0 podman[255330]: 2026-01-21 23:51:52.44122203 +0000 UTC m=+0.215622988 container died 30098f8c286a4b0da6f0224cd811623d2339ed75310ca92e68480aedc168c038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tu, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:51:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-4507f25f9e515cb3adec0545f7688c81344813e48c76f09eb3975eb4d5721c30-merged.mount: Deactivated successfully.
Jan 21 23:51:52 compute-0 podman[255330]: 2026-01-21 23:51:52.487508368 +0000 UTC m=+0.261909346 container remove 30098f8c286a4b0da6f0224cd811623d2339ed75310ca92e68480aedc168c038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tu, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 23:51:52 compute-0 systemd[1]: libpod-conmon-30098f8c286a4b0da6f0224cd811623d2339ed75310ca92e68480aedc168c038.scope: Deactivated successfully.
Jan 21 23:51:52 compute-0 ceph-mon[74318]: pgmap v987: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:51:52 compute-0 podman[255371]: 2026-01-21 23:51:52.752717085 +0000 UTC m=+0.075022477 container create 872ee8432c28ecd19a65f9106ad2669218a5cb36b180c216195665a5a26b7cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:51:52 compute-0 systemd[1]: Started libpod-conmon-872ee8432c28ecd19a65f9106ad2669218a5cb36b180c216195665a5a26b7cb4.scope.
Jan 21 23:51:52 compute-0 podman[255371]: 2026-01-21 23:51:52.722443961 +0000 UTC m=+0.044749373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:51:52 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:51:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbfff0017b9df367729786f7320ecca78fc423a2b8254b4c1bb5563a6093da0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:51:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbfff0017b9df367729786f7320ecca78fc423a2b8254b4c1bb5563a6093da0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:51:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbfff0017b9df367729786f7320ecca78fc423a2b8254b4c1bb5563a6093da0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:51:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbfff0017b9df367729786f7320ecca78fc423a2b8254b4c1bb5563a6093da0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:51:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbfff0017b9df367729786f7320ecca78fc423a2b8254b4c1bb5563a6093da0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:51:52 compute-0 podman[255371]: 2026-01-21 23:51:52.855171428 +0000 UTC m=+0.177476910 container init 872ee8432c28ecd19a65f9106ad2669218a5cb36b180c216195665a5a26b7cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 21 23:51:52 compute-0 podman[255371]: 2026-01-21 23:51:52.868402176 +0000 UTC m=+0.190707578 container start 872ee8432c28ecd19a65f9106ad2669218a5cb36b180c216195665a5a26b7cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Jan 21 23:51:52 compute-0 podman[255371]: 2026-01-21 23:51:52.872943776 +0000 UTC m=+0.195249238 container attach 872ee8432c28ecd19a65f9106ad2669218a5cb36b180c216195665a5a26b7cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 23:51:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:52.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:53 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:51:53.122 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 23:51:53 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:51:53.124 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 23:51:53 compute-0 amazing_kowalevski[255387]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:51:53 compute-0 amazing_kowalevski[255387]: --> relative data size: 1.0
Jan 21 23:51:53 compute-0 amazing_kowalevski[255387]: --> All data devices are unavailable
Jan 21 23:51:53 compute-0 systemd[1]: libpod-872ee8432c28ecd19a65f9106ad2669218a5cb36b180c216195665a5a26b7cb4.scope: Deactivated successfully.
Jan 21 23:51:53 compute-0 podman[255371]: 2026-01-21 23:51:53.732991966 +0000 UTC m=+1.055297348 container died 872ee8432c28ecd19a65f9106ad2669218a5cb36b180c216195665a5a26b7cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:51:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbfff0017b9df367729786f7320ecca78fc423a2b8254b4c1bb5563a6093da0c-merged.mount: Deactivated successfully.
Jan 21 23:51:53 compute-0 podman[255371]: 2026-01-21 23:51:53.793483874 +0000 UTC m=+1.115789236 container remove 872ee8432c28ecd19a65f9106ad2669218a5cb36b180c216195665a5a26b7cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:51:53 compute-0 systemd[1]: libpod-conmon-872ee8432c28ecd19a65f9106ad2669218a5cb36b180c216195665a5a26b7cb4.scope: Deactivated successfully.
Jan 21 23:51:53 compute-0 sudo[255265]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:53 compute-0 sudo[255418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:51:53 compute-0 sudo[255418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:53 compute-0 sudo[255418]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:53 compute-0 sudo[255443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:51:53 compute-0 sudo[255443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:53 compute-0 sudo[255443]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:54 compute-0 sudo[255468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:51:54 compute-0 sudo[255468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:54 compute-0 sudo[255468]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:54 compute-0 sudo[255493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:51:54 compute-0 sudo[255493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:54.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:51:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:54 compute-0 podman[255558]: 2026-01-21 23:51:54.6161672 +0000 UTC m=+0.068053742 container create 07412c758886f1d771b513f06c62aff18d3eaa880790692c9b9de5e8443ecd60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 23:51:54 compute-0 systemd[1]: Started libpod-conmon-07412c758886f1d771b513f06c62aff18d3eaa880790692c9b9de5e8443ecd60.scope.
Jan 21 23:51:54 compute-0 podman[255558]: 2026-01-21 23:51:54.592537141 +0000 UTC m=+0.044423703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:51:54 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:51:54 compute-0 podman[255558]: 2026-01-21 23:51:54.720099469 +0000 UTC m=+0.171986011 container init 07412c758886f1d771b513f06c62aff18d3eaa880790692c9b9de5e8443ecd60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 21 23:51:54 compute-0 podman[255558]: 2026-01-21 23:51:54.732845882 +0000 UTC m=+0.184732424 container start 07412c758886f1d771b513f06c62aff18d3eaa880790692c9b9de5e8443ecd60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 21 23:51:54 compute-0 podman[255558]: 2026-01-21 23:51:54.737226887 +0000 UTC m=+0.189113429 container attach 07412c758886f1d771b513f06c62aff18d3eaa880790692c9b9de5e8443ecd60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Jan 21 23:51:54 compute-0 recursing_mendeleev[255575]: 167 167
Jan 21 23:51:54 compute-0 systemd[1]: libpod-07412c758886f1d771b513f06c62aff18d3eaa880790692c9b9de5e8443ecd60.scope: Deactivated successfully.
Jan 21 23:51:54 compute-0 podman[255558]: 2026-01-21 23:51:54.740412385 +0000 UTC m=+0.192298897 container died 07412c758886f1d771b513f06c62aff18d3eaa880790692c9b9de5e8443ecd60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 21 23:51:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-099d7c038a5fc9884095d4df02136a4452a953e53ecf962b294d0c997ee48c66-merged.mount: Deactivated successfully.
Jan 21 23:51:54 compute-0 podman[255558]: 2026-01-21 23:51:54.793910327 +0000 UTC m=+0.245796849 container remove 07412c758886f1d771b513f06c62aff18d3eaa880790692c9b9de5e8443ecd60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:51:54 compute-0 systemd[1]: libpod-conmon-07412c758886f1d771b513f06c62aff18d3eaa880790692c9b9de5e8443ecd60.scope: Deactivated successfully.
Jan 21 23:51:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:54.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:54 compute-0 podman[255599]: 2026-01-21 23:51:54.984429268 +0000 UTC m=+0.048616891 container create febb05497abe9a031ce104b1b076a619df60b11c80cb1c5c7dbcf72c96079a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_curie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:51:55 compute-0 systemd[1]: Started libpod-conmon-febb05497abe9a031ce104b1b076a619df60b11c80cb1c5c7dbcf72c96079a69.scope.
Jan 21 23:51:55 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:51:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9117da39bdb4befa07ef3af5c28d728c0e6cdd935a704c736ec17b835ec43ffe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:51:55 compute-0 podman[255599]: 2026-01-21 23:51:54.962402128 +0000 UTC m=+0.026589731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:51:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9117da39bdb4befa07ef3af5c28d728c0e6cdd935a704c736ec17b835ec43ffe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:51:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9117da39bdb4befa07ef3af5c28d728c0e6cdd935a704c736ec17b835ec43ffe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:51:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9117da39bdb4befa07ef3af5c28d728c0e6cdd935a704c736ec17b835ec43ffe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:51:55 compute-0 podman[255599]: 2026-01-21 23:51:55.084307931 +0000 UTC m=+0.148495594 container init febb05497abe9a031ce104b1b076a619df60b11c80cb1c5c7dbcf72c96079a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:51:55 compute-0 podman[255599]: 2026-01-21 23:51:55.09427888 +0000 UTC m=+0.158466463 container start febb05497abe9a031ce104b1b076a619df60b11c80cb1c5c7dbcf72c96079a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_curie, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 21 23:51:55 compute-0 podman[255599]: 2026-01-21 23:51:55.097916202 +0000 UTC m=+0.162103835 container attach febb05497abe9a031ce104b1b076a619df60b11c80cb1c5c7dbcf72c96079a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_curie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:51:55 compute-0 ceph-mon[74318]: pgmap v988: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:55 compute-0 determined_curie[255615]: {
Jan 21 23:51:55 compute-0 determined_curie[255615]:     "1": [
Jan 21 23:51:55 compute-0 determined_curie[255615]:         {
Jan 21 23:51:55 compute-0 determined_curie[255615]:             "devices": [
Jan 21 23:51:55 compute-0 determined_curie[255615]:                 "/dev/loop3"
Jan 21 23:51:55 compute-0 determined_curie[255615]:             ],
Jan 21 23:51:55 compute-0 determined_curie[255615]:             "lv_name": "ceph_lv0",
Jan 21 23:51:55 compute-0 determined_curie[255615]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:51:55 compute-0 determined_curie[255615]:             "lv_size": "7511998464",
Jan 21 23:51:55 compute-0 determined_curie[255615]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:51:55 compute-0 determined_curie[255615]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:51:55 compute-0 determined_curie[255615]:             "name": "ceph_lv0",
Jan 21 23:51:55 compute-0 determined_curie[255615]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:51:55 compute-0 determined_curie[255615]:             "tags": {
Jan 21 23:51:55 compute-0 determined_curie[255615]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:51:55 compute-0 determined_curie[255615]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:51:55 compute-0 determined_curie[255615]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:51:55 compute-0 determined_curie[255615]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:51:55 compute-0 determined_curie[255615]:                 "ceph.cluster_name": "ceph",
Jan 21 23:51:55 compute-0 determined_curie[255615]:                 "ceph.crush_device_class": "",
Jan 21 23:51:55 compute-0 determined_curie[255615]:                 "ceph.encrypted": "0",
Jan 21 23:51:55 compute-0 determined_curie[255615]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:51:55 compute-0 determined_curie[255615]:                 "ceph.osd_id": "1",
Jan 21 23:51:55 compute-0 determined_curie[255615]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:51:55 compute-0 determined_curie[255615]:                 "ceph.type": "block",
Jan 21 23:51:55 compute-0 determined_curie[255615]:                 "ceph.vdo": "0"
Jan 21 23:51:55 compute-0 determined_curie[255615]:             },
Jan 21 23:51:55 compute-0 determined_curie[255615]:             "type": "block",
Jan 21 23:51:55 compute-0 determined_curie[255615]:             "vg_name": "ceph_vg0"
Jan 21 23:51:55 compute-0 determined_curie[255615]:         }
Jan 21 23:51:55 compute-0 determined_curie[255615]:     ]
Jan 21 23:51:55 compute-0 determined_curie[255615]: }
Jan 21 23:51:55 compute-0 systemd[1]: libpod-febb05497abe9a031ce104b1b076a619df60b11c80cb1c5c7dbcf72c96079a69.scope: Deactivated successfully.
Jan 21 23:51:55 compute-0 podman[255599]: 2026-01-21 23:51:55.899413354 +0000 UTC m=+0.963600967 container died febb05497abe9a031ce104b1b076a619df60b11c80cb1c5c7dbcf72c96079a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 21 23:51:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9117da39bdb4befa07ef3af5c28d728c0e6cdd935a704c736ec17b835ec43ffe-merged.mount: Deactivated successfully.
Jan 21 23:51:55 compute-0 podman[255599]: 2026-01-21 23:51:55.963284766 +0000 UTC m=+1.027472359 container remove febb05497abe9a031ce104b1b076a619df60b11c80cb1c5c7dbcf72c96079a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_curie, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:51:55 compute-0 systemd[1]: libpod-conmon-febb05497abe9a031ce104b1b076a619df60b11c80cb1c5c7dbcf72c96079a69.scope: Deactivated successfully.
Jan 21 23:51:55 compute-0 sudo[255493]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:56 compute-0 sudo[255639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:51:56 compute-0 sudo[255639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:56 compute-0 sudo[255639]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:56 compute-0 sudo[255664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:51:56 compute-0 sudo[255664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:56 compute-0 sudo[255664]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:56 compute-0 sudo[255689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:51:56 compute-0 sudo[255689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:56 compute-0 sudo[255689]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:51:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:56.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:51:56 compute-0 sudo[255714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:51:56 compute-0 sudo[255714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:56 compute-0 podman[255780]: 2026-01-21 23:51:56.805186096 +0000 UTC m=+0.066076602 container create 304544d82957afb688a2d9973fca6abbca1594f37e4478a4d524610d26c95675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 23:51:56 compute-0 systemd[1]: Started libpod-conmon-304544d82957afb688a2d9973fca6abbca1594f37e4478a4d524610d26c95675.scope.
Jan 21 23:51:56 compute-0 podman[255780]: 2026-01-21 23:51:56.785412285 +0000 UTC m=+0.046302851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:51:56 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:51:56 compute-0 podman[255780]: 2026-01-21 23:51:56.915766049 +0000 UTC m=+0.176656575 container init 304544d82957afb688a2d9973fca6abbca1594f37e4478a4d524610d26c95675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 21 23:51:56 compute-0 podman[255780]: 2026-01-21 23:51:56.923039314 +0000 UTC m=+0.183929820 container start 304544d82957afb688a2d9973fca6abbca1594f37e4478a4d524610d26c95675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:51:56 compute-0 brave_ritchie[255796]: 167 167
Jan 21 23:51:56 compute-0 systemd[1]: libpod-304544d82957afb688a2d9973fca6abbca1594f37e4478a4d524610d26c95675.scope: Deactivated successfully.
Jan 21 23:51:56 compute-0 podman[255780]: 2026-01-21 23:51:56.928971187 +0000 UTC m=+0.189861713 container attach 304544d82957afb688a2d9973fca6abbca1594f37e4478a4d524610d26c95675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 23:51:56 compute-0 podman[255780]: 2026-01-21 23:51:56.929337728 +0000 UTC m=+0.190228254 container died 304544d82957afb688a2d9973fca6abbca1594f37e4478a4d524610d26c95675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 23:51:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:56.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff63910dab176177b89e1df27a7070ff268530bc67fe0ae7e1ff9667a68c9fc9-merged.mount: Deactivated successfully.
Jan 21 23:51:56 compute-0 podman[255780]: 2026-01-21 23:51:56.966481364 +0000 UTC m=+0.227371870 container remove 304544d82957afb688a2d9973fca6abbca1594f37e4478a4d524610d26c95675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 21 23:51:56 compute-0 podman[255793]: 2026-01-21 23:51:56.971897092 +0000 UTC m=+0.117310243 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 21 23:51:56 compute-0 systemd[1]: libpod-conmon-304544d82957afb688a2d9973fca6abbca1594f37e4478a4d524610d26c95675.scope: Deactivated successfully.
Jan 21 23:51:57 compute-0 podman[255848]: 2026-01-21 23:51:57.174865358 +0000 UTC m=+0.062556853 container create 257003e88a36a51f68faf40a1730d8da0c8b03809ada6a6c9b0897f70a342b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rhodes, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:51:57 compute-0 systemd[1]: Started libpod-conmon-257003e88a36a51f68faf40a1730d8da0c8b03809ada6a6c9b0897f70a342b6a.scope.
Jan 21 23:51:57 compute-0 podman[255848]: 2026-01-21 23:51:57.155098668 +0000 UTC m=+0.042790143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:51:57 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:51:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2604815627f1df02a97ea98ac26686506033b690c63b1fa60d9d52440b30ede/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:51:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2604815627f1df02a97ea98ac26686506033b690c63b1fa60d9d52440b30ede/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:51:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2604815627f1df02a97ea98ac26686506033b690c63b1fa60d9d52440b30ede/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:51:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2604815627f1df02a97ea98ac26686506033b690c63b1fa60d9d52440b30ede/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:51:57 compute-0 podman[255848]: 2026-01-21 23:51:57.280628643 +0000 UTC m=+0.168320158 container init 257003e88a36a51f68faf40a1730d8da0c8b03809ada6a6c9b0897f70a342b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rhodes, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:51:57 compute-0 podman[255848]: 2026-01-21 23:51:57.293268032 +0000 UTC m=+0.180959487 container start 257003e88a36a51f68faf40a1730d8da0c8b03809ada6a6c9b0897f70a342b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 21 23:51:57 compute-0 podman[255848]: 2026-01-21 23:51:57.297408661 +0000 UTC m=+0.185100176 container attach 257003e88a36a51f68faf40a1730d8da0c8b03809ada6a6c9b0897f70a342b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rhodes, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 21 23:51:57 compute-0 ceph-mon[74318]: pgmap v989: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:51:58 compute-0 keen_rhodes[255865]: {
Jan 21 23:51:58 compute-0 keen_rhodes[255865]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:51:58 compute-0 keen_rhodes[255865]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:51:58 compute-0 keen_rhodes[255865]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:51:58 compute-0 keen_rhodes[255865]:         "osd_id": 1,
Jan 21 23:51:58 compute-0 keen_rhodes[255865]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:51:58 compute-0 keen_rhodes[255865]:         "type": "bluestore"
Jan 21 23:51:58 compute-0 keen_rhodes[255865]:     }
Jan 21 23:51:58 compute-0 keen_rhodes[255865]: }
Jan 21 23:51:58 compute-0 systemd[1]: libpod-257003e88a36a51f68faf40a1730d8da0c8b03809ada6a6c9b0897f70a342b6a.scope: Deactivated successfully.
Jan 21 23:51:58 compute-0 podman[255848]: 2026-01-21 23:51:58.196413313 +0000 UTC m=+1.084104768 container died 257003e88a36a51f68faf40a1730d8da0c8b03809ada6a6c9b0897f70a342b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rhodes, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 21 23:51:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:51:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:51:58.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:51:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2604815627f1df02a97ea98ac26686506033b690c63b1fa60d9d52440b30ede-merged.mount: Deactivated successfully.
Jan 21 23:51:58 compute-0 podman[255848]: 2026-01-21 23:51:58.266968591 +0000 UTC m=+1.154660086 container remove 257003e88a36a51f68faf40a1730d8da0c8b03809ada6a6c9b0897f70a342b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rhodes, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:51:58 compute-0 systemd[1]: libpod-conmon-257003e88a36a51f68faf40a1730d8da0c8b03809ada6a6c9b0897f70a342b6a.scope: Deactivated successfully.
Jan 21 23:51:58 compute-0 sudo[255714]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:51:58 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:51:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:51:58 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:51:58 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 9ffbcb11-e23d-4a3e-838b-f2cd71865ad5 does not exist
Jan 21 23:51:58 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev b5b41d43-42d6-4a6d-aaa5-ff6389e63b9b does not exist
Jan 21 23:51:58 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 2ea663ca-fdf9-4e3f-b5a2-ea8b096db970 does not exist
Jan 21 23:51:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:51:58 compute-0 sudo[255901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:51:58 compute-0 sudo[255901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:58 compute-0 sudo[255901]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:58 compute-0 sudo[255926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:51:58 compute-0 sudo[255926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:51:58 compute-0 sudo[255926]: pam_unix(sudo:session): session closed for user root
Jan 21 23:51:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:51:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:51:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:51:58.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:51:59 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:51:59 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:51:59 compute-0 ceph-mon[74318]: pgmap v990: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:52:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:00.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:52:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:00.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:01 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:52:01.128 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 23:52:01 compute-0 ceph-mon[74318]: pgmap v991: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:52:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:02.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:52:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:02 compute-0 ceph-mon[74318]: pgmap v992: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:52:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:02.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:04.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:52:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:04.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:52:05 compute-0 ceph-mon[74318]: pgmap v993: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:05 compute-0 podman[255955]: 2026-01-21 23:52:05.989659122 +0000 UTC m=+0.087329508 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 21 23:52:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:06.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:06 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Jan 21 23:52:06 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Jan 21 23:52:06 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Jan 21 23:52:06 compute-0 sudo[255975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:52:06 compute-0 sudo[255975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:52:06 compute-0 sudo[255975]: pam_unix(sudo:session): session closed for user root
Jan 21 23:52:06 compute-0 sudo[256000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:52:06 compute-0 sudo[256000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:52:06 compute-0 sudo[256000]: pam_unix(sudo:session): session closed for user root
Jan 21 23:52:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:06.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:07 compute-0 ceph-mon[74318]: pgmap v994: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:07 compute-0 ceph-mon[74318]: osdmap e149: 3 total, 3 up, 3 in
Jan 21 23:52:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:52:07 compute-0 nova_compute[247516]: 2026-01-21 23:52:07.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:52:07 compute-0 nova_compute[247516]: 2026-01-21 23:52:07.994 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 23:52:07 compute-0 nova_compute[247516]: 2026-01-21 23:52:07.994 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 23:52:08 compute-0 nova_compute[247516]: 2026-01-21 23:52:08.009 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 23:52:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:52:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:08.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:52:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 614 B/s wr, 3 op/s
Jan 21 23:52:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Jan 21 23:52:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Jan 21 23:52:08 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Jan 21 23:52:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:08.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:08 compute-0 nova_compute[247516]: 2026-01-21 23:52:08.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:52:08 compute-0 nova_compute[247516]: 2026-01-21 23:52:08.991 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 23:52:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:52:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:52:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:52:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:52:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:52:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:52:09 compute-0 ceph-mon[74318]: pgmap v996: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 614 B/s wr, 3 op/s
Jan 21 23:52:09 compute-0 ceph-mon[74318]: osdmap e150: 3 total, 3 up, 3 in
Jan 21 23:52:09 compute-0 nova_compute[247516]: 2026-01-21 23:52:09.987 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:52:09 compute-0 nova_compute[247516]: 2026-01-21 23:52:09.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:52:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:10.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 54 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.6 MiB/s wr, 51 op/s
Jan 21 23:52:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:10.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:10 compute-0 nova_compute[247516]: 2026-01-21 23:52:10.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:52:10 compute-0 nova_compute[247516]: 2026-01-21 23:52:10.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:52:11 compute-0 ceph-mon[74318]: pgmap v998: 305 pgs: 305 active+clean; 54 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.6 MiB/s wr, 51 op/s
Jan 21 23:52:11 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/4006548528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:52:11 compute-0 nova_compute[247516]: 2026-01-21 23:52:11.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:52:11 compute-0 nova_compute[247516]: 2026-01-21 23:52:11.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:52:12 compute-0 nova_compute[247516]: 2026-01-21 23:52:12.031 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:52:12 compute-0 nova_compute[247516]: 2026-01-21 23:52:12.031 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:52:12 compute-0 nova_compute[247516]: 2026-01-21 23:52:12.031 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:52:12 compute-0 nova_compute[247516]: 2026-01-21 23:52:12.031 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 23:52:12 compute-0 nova_compute[247516]: 2026-01-21 23:52:12.032 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:52:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:12.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 52 op/s
Jan 21 23:52:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:52:12 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3447581567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:52:12 compute-0 nova_compute[247516]: 2026-01-21 23:52:12.512 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:52:12 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/242651516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:52:12 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/257943759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:52:12 compute-0 ceph-mon[74318]: pgmap v999: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 52 op/s
Jan 21 23:52:12 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3447581567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:52:12 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1068368365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:52:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:52:12 compute-0 nova_compute[247516]: 2026-01-21 23:52:12.756 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 23:52:12 compute-0 nova_compute[247516]: 2026-01-21 23:52:12.758 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5174MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 23:52:12 compute-0 nova_compute[247516]: 2026-01-21 23:52:12.759 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:52:12 compute-0 nova_compute[247516]: 2026-01-21 23:52:12.759 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:52:12 compute-0 nova_compute[247516]: 2026-01-21 23:52:12.851 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 23:52:12 compute-0 nova_compute[247516]: 2026-01-21 23:52:12.852 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 23:52:12 compute-0 nova_compute[247516]: 2026-01-21 23:52:12.878 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:52:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:52:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:12.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:52:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:52:13 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/245461766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:52:13 compute-0 nova_compute[247516]: 2026-01-21 23:52:13.318 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:52:13 compute-0 nova_compute[247516]: 2026-01-21 23:52:13.325 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 23:52:13 compute-0 nova_compute[247516]: 2026-01-21 23:52:13.344 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 23:52:13 compute-0 nova_compute[247516]: 2026-01-21 23:52:13.347 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 23:52:13 compute-0 nova_compute[247516]: 2026-01-21 23:52:13.347 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:52:13 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/245461766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:52:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:52:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:14.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:52:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 52 op/s
Jan 21 23:52:14 compute-0 ceph-mon[74318]: pgmap v1000: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 52 op/s
Jan 21 23:52:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:14.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:15 compute-0 nova_compute[247516]: 2026-01-21 23:52:15.347 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:52:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:52:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:16.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:52:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 42 op/s
Jan 21 23:52:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:16.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:17 compute-0 ceph-mon[74318]: pgmap v1001: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 42 op/s
Jan 21 23:52:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:52:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:18.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 38 op/s
Jan 21 23:52:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:52:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:18.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:52:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 21 23:52:19 compute-0 ceph-mon[74318]: pgmap v1002: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 38 op/s
Jan 21 23:52:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 21 23:52:19 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 21 23:52:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:52:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:20.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:52:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 821 KiB/s wr, 23 op/s
Jan 21 23:52:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Jan 21 23:52:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Jan 21 23:52:20 compute-0 ceph-mon[74318]: osdmap e151: 3 total, 3 up, 3 in
Jan 21 23:52:20 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Jan 21 23:52:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:20.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:21 compute-0 ceph-mon[74318]: pgmap v1004: 305 pgs: 305 active+clean; 62 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 821 KiB/s wr, 23 op/s
Jan 21 23:52:21 compute-0 ceph-mon[74318]: osdmap e152: 3 total, 3 up, 3 in
Jan 21 23:52:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:22.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 54 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 2.9 KiB/s wr, 30 op/s
Jan 21 23:52:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Jan 21 23:52:22 compute-0 ceph-mon[74318]: pgmap v1006: 305 pgs: 305 active+clean; 54 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 2.9 KiB/s wr, 30 op/s
Jan 21 23:52:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Jan 21 23:52:22 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Jan 21 23:52:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:52:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:52:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:22.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:52:23 compute-0 ceph-mon[74318]: osdmap e153: 3 total, 3 up, 3 in
Jan 21 23:52:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:24.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 54 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 3.8 KiB/s wr, 41 op/s
Jan 21 23:52:24 compute-0 ceph-mon[74318]: pgmap v1008: 305 pgs: 305 active+clean; 54 MiB data, 207 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 3.8 KiB/s wr, 41 op/s
Jan 21 23:52:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:52:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:24.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:52:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3130527719' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:52:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3130527719' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:52:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:52:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:26.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:52:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 6.5 KiB/s wr, 93 op/s
Jan 21 23:52:26 compute-0 ceph-mon[74318]: pgmap v1009: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 6.5 KiB/s wr, 93 op/s
Jan 21 23:52:26 compute-0 sudo[256079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:52:26 compute-0 sudo[256079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:52:26 compute-0 sudo[256079]: pam_unix(sudo:session): session closed for user root
Jan 21 23:52:26 compute-0 sudo[256104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:52:26 compute-0 sudo[256104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:52:26 compute-0 sudo[256104]: pam_unix(sudo:session): session closed for user root
Jan 21 23:52:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:27.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:52:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Jan 21 23:52:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Jan 21 23:52:27 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Jan 21 23:52:28 compute-0 podman[256130]: 2026-01-21 23:52:28.015000387 +0000 UTC m=+0.119571043 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 21 23:52:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:28.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 2.9 KiB/s wr, 53 op/s
Jan 21 23:52:28 compute-0 ceph-mon[74318]: osdmap e154: 3 total, 3 up, 3 in
Jan 21 23:52:28 compute-0 ceph-mon[74318]: pgmap v1011: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 2.9 KiB/s wr, 53 op/s
Jan 21 23:52:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:29.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Jan 21 23:52:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Jan 21 23:52:30 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Jan 21 23:52:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:30.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 4.0 KiB/s wr, 74 op/s
Jan 21 23:52:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:52:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:31.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:52:31 compute-0 ceph-mon[74318]: osdmap e155: 3 total, 3 up, 3 in
Jan 21 23:52:31 compute-0 ceph-mon[74318]: pgmap v1013: 305 pgs: 305 active+clean; 41 MiB data, 198 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 4.0 KiB/s wr, 74 op/s
Jan 21 23:52:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:52:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:32.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:52:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 4.5 KiB/s wr, 79 op/s
Jan 21 23:52:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:52:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Jan 21 23:52:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Jan 21 23:52:32 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Jan 21 23:52:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:33.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:33 compute-0 ceph-mon[74318]: pgmap v1014: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 4.5 KiB/s wr, 79 op/s
Jan 21 23:52:33 compute-0 ceph-mon[74318]: osdmap e156: 3 total, 3 up, 3 in
Jan 21 23:52:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:52:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:34.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:52:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 2.1 KiB/s wr, 34 op/s
Jan 21 23:52:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:52:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:35.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:52:35 compute-0 ceph-mon[74318]: pgmap v1016: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 2.1 KiB/s wr, 34 op/s
Jan 21 23:52:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:36.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 21 23:52:36 compute-0 podman[256161]: 2026-01-21 23:52:36.942721136 +0000 UTC m=+0.058682102 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 21 23:52:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:37.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:37 compute-0 ceph-mon[74318]: pgmap v1017: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 21 23:52:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:52:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Jan 21 23:52:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Jan 21 23:52:37 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Jan 21 23:52:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:38.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 5.2 KiB/s rd, 639 B/s wr, 8 op/s
Jan 21 23:52:38 compute-0 ceph-mon[74318]: osdmap e157: 3 total, 3 up, 3 in
Jan 21 23:52:38 compute-0 ceph-mon[74318]: pgmap v1019: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 5.2 KiB/s rd, 639 B/s wr, 8 op/s
Jan 21 23:52:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:39.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:52:39
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', '.mgr', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'images', '.rgw.root']
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:52:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:52:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:40.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 21 23:52:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:52:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:41.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:52:41 compute-0 ceph-mon[74318]: pgmap v1020: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 21 23:52:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:52:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:42.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:52:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Jan 21 23:52:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:52:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:43.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:43 compute-0 ceph-mon[74318]: pgmap v1021: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Jan 21 23:52:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:44.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Jan 21 23:52:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:52:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:45.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:52:45 compute-0 ceph-mon[74318]: pgmap v1022: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Jan 21 23:52:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:46.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:46 compute-0 ceph-mon[74318]: pgmap v1023: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:47.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:47 compute-0 sudo[256185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:52:47 compute-0 sudo[256185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:52:47 compute-0 sudo[256185]: pam_unix(sudo:session): session closed for user root
Jan 21 23:52:47 compute-0 sudo[256210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:52:47 compute-0 sudo[256210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:52:47 compute-0 sudo[256210]: pam_unix(sudo:session): session closed for user root
Jan 21 23:52:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:52:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:48.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:52:48.748 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:52:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:52:48.749 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:52:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:52:48.749 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:52:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:49.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:49 compute-0 ceph-mon[74318]: pgmap v1024: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:50.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:50 compute-0 ceph-mon[74318]: pgmap v1025: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:51.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 23:52:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5334 writes, 23K keys, 5333 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 5334 writes, 5333 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1507 writes, 6652 keys, 1507 commit groups, 1.0 writes per commit group, ingest: 10.25 MB, 0.02 MB/s
                                           Interval WAL: 1507 writes, 1507 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     91.8      0.31              0.10        13    0.023       0      0       0.0       0.0
                                             L6      1/0    7.13 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.6    120.3     99.6      1.01              0.38        12    0.084     56K   6387       0.0       0.0
                                            Sum      1/0    7.13 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.6     92.3     97.8      1.31              0.48        25    0.053     56K   6387       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.9    106.3    105.5      0.55              0.22        12    0.046     29K   3039       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    120.3     99.6      1.01              0.38        12    0.084     56K   6387       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     93.1      0.30              0.10        12    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.027, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.13 GB write, 0.07 MB/s write, 0.12 GB read, 0.07 MB/s read, 1.3 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f1db2f1f0#2 capacity: 304.00 MB usage: 10.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(584,10.14 MB,3.33522%) FilterBlock(26,162.17 KB,0.0520957%) IndexBlock(26,304.84 KB,0.0979273%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 21 23:52:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:52.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:52:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:53.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:53 compute-0 ceph-mon[74318]: pgmap v1026: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:52:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:52:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:54.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:52:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:54 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:52:54.767 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 23:52:54 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:52:54.768 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 23:52:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:55.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:55 compute-0 ceph-mon[74318]: pgmap v1027: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:56.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:57.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:57 compute-0 ceph-mon[74318]: pgmap v1028: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:52:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:52:58.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:58 compute-0 ceph-mon[74318]: pgmap v1029: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:52:58 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:52:58.770 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 23:52:58 compute-0 sudo[256247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:52:58 compute-0 sudo[256247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:52:58 compute-0 sudo[256247]: pam_unix(sudo:session): session closed for user root
Jan 21 23:52:59 compute-0 podman[256241]: 2026-01-21 23:52:59.020619894 +0000 UTC m=+0.140266272 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:52:59 compute-0 sudo[256289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:52:59 compute-0 sudo[256289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:52:59 compute-0 sudo[256289]: pam_unix(sudo:session): session closed for user root
Jan 21 23:52:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:52:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:52:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:52:59.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:52:59 compute-0 sudo[256317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:52:59 compute-0 sudo[256317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:52:59 compute-0 sudo[256317]: pam_unix(sudo:session): session closed for user root
Jan 21 23:52:59 compute-0 sudo[256342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:52:59 compute-0 sudo[256342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:52:59 compute-0 sudo[256342]: pam_unix(sudo:session): session closed for user root
Jan 21 23:52:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:52:59 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:52:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:52:59 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:52:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:52:59 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:52:59 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 73e24dd7-5a01-4b91-9fd7-a3abef2cab42 does not exist
Jan 21 23:52:59 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 25290950-fe00-4e70-bd53-ce915fe57465 does not exist
Jan 21 23:52:59 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 2f131fe2-ac61-4b80-b84c-ff5b2a72fed4 does not exist
Jan 21 23:52:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:52:59 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:52:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:52:59 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:52:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:52:59 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:52:59 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:52:59 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:52:59 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:52:59 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:52:59 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:52:59 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:52:59 compute-0 sudo[256399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:52:59 compute-0 sudo[256399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:52:59 compute-0 sudo[256399]: pam_unix(sudo:session): session closed for user root
Jan 21 23:52:59 compute-0 sudo[256424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:52:59 compute-0 sudo[256424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:52:59 compute-0 sudo[256424]: pam_unix(sudo:session): session closed for user root
Jan 21 23:52:59 compute-0 sudo[256449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:52:59 compute-0 sudo[256449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:52:59 compute-0 sudo[256449]: pam_unix(sudo:session): session closed for user root
Jan 21 23:53:00 compute-0 sudo[256474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:53:00 compute-0 sudo[256474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:53:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:53:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:00.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:53:00 compute-0 podman[256540]: 2026-01-21 23:53:00.426237966 +0000 UTC m=+0.068032141 container create b4547f26590656e37d9679524c914f015ce4ca619b69bc4caafde3efa5360b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elbakyan, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:53:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:00 compute-0 systemd[1]: Started libpod-conmon-b4547f26590656e37d9679524c914f015ce4ca619b69bc4caafde3efa5360b47.scope.
Jan 21 23:53:00 compute-0 podman[256540]: 2026-01-21 23:53:00.397379045 +0000 UTC m=+0.039173280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:53:00 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:53:00 compute-0 podman[256540]: 2026-01-21 23:53:00.522036753 +0000 UTC m=+0.163830968 container init b4547f26590656e37d9679524c914f015ce4ca619b69bc4caafde3efa5360b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:53:00 compute-0 podman[256540]: 2026-01-21 23:53:00.532807685 +0000 UTC m=+0.174601860 container start b4547f26590656e37d9679524c914f015ce4ca619b69bc4caafde3efa5360b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 21 23:53:00 compute-0 podman[256540]: 2026-01-21 23:53:00.537240932 +0000 UTC m=+0.179035167 container attach b4547f26590656e37d9679524c914f015ce4ca619b69bc4caafde3efa5360b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 21 23:53:00 compute-0 keen_elbakyan[256557]: 167 167
Jan 21 23:53:00 compute-0 systemd[1]: libpod-b4547f26590656e37d9679524c914f015ce4ca619b69bc4caafde3efa5360b47.scope: Deactivated successfully.
Jan 21 23:53:00 compute-0 podman[256540]: 2026-01-21 23:53:00.542859956 +0000 UTC m=+0.184654121 container died b4547f26590656e37d9679524c914f015ce4ca619b69bc4caafde3efa5360b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 21 23:53:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce3ea36c60e9795cdbc70c5173ca0b672be77806384d7c299369bba80375b276-merged.mount: Deactivated successfully.
Jan 21 23:53:00 compute-0 podman[256540]: 2026-01-21 23:53:00.600789924 +0000 UTC m=+0.242584099 container remove b4547f26590656e37d9679524c914f015ce4ca619b69bc4caafde3efa5360b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 23:53:00 compute-0 systemd[1]: libpod-conmon-b4547f26590656e37d9679524c914f015ce4ca619b69bc4caafde3efa5360b47.scope: Deactivated successfully.
Jan 21 23:53:00 compute-0 ceph-mon[74318]: pgmap v1030: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:00 compute-0 podman[256582]: 2026-01-21 23:53:00.84396002 +0000 UTC m=+0.064443360 container create 41b1db889a0f33895f4795fb8ad1aa977a81fd638caed30149e75cc4e708e4b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 23:53:00 compute-0 systemd[1]: Started libpod-conmon-41b1db889a0f33895f4795fb8ad1aa977a81fd638caed30149e75cc4e708e4b8.scope.
Jan 21 23:53:00 compute-0 podman[256582]: 2026-01-21 23:53:00.816500063 +0000 UTC m=+0.036983503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:53:00 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:53:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436affd59c28283b794815b3f8d3be80a4fb2f9b444d74569d0f535f52b25f81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:53:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436affd59c28283b794815b3f8d3be80a4fb2f9b444d74569d0f535f52b25f81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:53:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436affd59c28283b794815b3f8d3be80a4fb2f9b444d74569d0f535f52b25f81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:53:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436affd59c28283b794815b3f8d3be80a4fb2f9b444d74569d0f535f52b25f81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:53:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436affd59c28283b794815b3f8d3be80a4fb2f9b444d74569d0f535f52b25f81/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:53:00 compute-0 podman[256582]: 2026-01-21 23:53:00.948714175 +0000 UTC m=+0.169197515 container init 41b1db889a0f33895f4795fb8ad1aa977a81fd638caed30149e75cc4e708e4b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_easley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:53:00 compute-0 podman[256582]: 2026-01-21 23:53:00.954334468 +0000 UTC m=+0.174817818 container start 41b1db889a0f33895f4795fb8ad1aa977a81fd638caed30149e75cc4e708e4b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_easley, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:53:00 compute-0 podman[256582]: 2026-01-21 23:53:00.957824836 +0000 UTC m=+0.178308176 container attach 41b1db889a0f33895f4795fb8ad1aa977a81fd638caed30149e75cc4e708e4b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_easley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:53:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:01.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:01 compute-0 cool_easley[256599]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:53:01 compute-0 cool_easley[256599]: --> relative data size: 1.0
Jan 21 23:53:01 compute-0 cool_easley[256599]: --> All data devices are unavailable
Jan 21 23:53:01 compute-0 systemd[1]: libpod-41b1db889a0f33895f4795fb8ad1aa977a81fd638caed30149e75cc4e708e4b8.scope: Deactivated successfully.
Jan 21 23:53:01 compute-0 podman[256615]: 2026-01-21 23:53:01.928889713 +0000 UTC m=+0.044530116 container died 41b1db889a0f33895f4795fb8ad1aa977a81fd638caed30149e75cc4e708e4b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_easley, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 23:53:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-436affd59c28283b794815b3f8d3be80a4fb2f9b444d74569d0f535f52b25f81-merged.mount: Deactivated successfully.
Jan 21 23:53:02 compute-0 podman[256615]: 2026-01-21 23:53:02.007970814 +0000 UTC m=+0.123611157 container remove 41b1db889a0f33895f4795fb8ad1aa977a81fd638caed30149e75cc4e708e4b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_easley, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 21 23:53:02 compute-0 systemd[1]: libpod-conmon-41b1db889a0f33895f4795fb8ad1aa977a81fd638caed30149e75cc4e708e4b8.scope: Deactivated successfully.
Jan 21 23:53:02 compute-0 sudo[256474]: pam_unix(sudo:session): session closed for user root
Jan 21 23:53:02 compute-0 sudo[256630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:53:02 compute-0 sudo[256630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:53:02 compute-0 sudo[256630]: pam_unix(sudo:session): session closed for user root
Jan 21 23:53:02 compute-0 sudo[256655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:53:02 compute-0 sudo[256655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:53:02 compute-0 sudo[256655]: pam_unix(sudo:session): session closed for user root
Jan 21 23:53:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:02.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:02 compute-0 sudo[256680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:53:02 compute-0 sudo[256680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:53:02 compute-0 sudo[256680]: pam_unix(sudo:session): session closed for user root
Jan 21 23:53:02 compute-0 sudo[256705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:53:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:02 compute-0 sudo[256705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:53:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:53:02 compute-0 podman[256770]: 2026-01-21 23:53:02.925905001 +0000 UTC m=+0.054682560 container create 59abd8bbec8ad74a78fc7b840a0dedc99a821eab1acc098e0436d4e6dc7cdda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:53:02 compute-0 systemd[1]: Started libpod-conmon-59abd8bbec8ad74a78fc7b840a0dedc99a821eab1acc098e0436d4e6dc7cdda4.scope.
Jan 21 23:53:02 compute-0 podman[256770]: 2026-01-21 23:53:02.899291979 +0000 UTC m=+0.028069608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:53:02 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:53:03 compute-0 podman[256770]: 2026-01-21 23:53:03.01301138 +0000 UTC m=+0.141789009 container init 59abd8bbec8ad74a78fc7b840a0dedc99a821eab1acc098e0436d4e6dc7cdda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poincare, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 21 23:53:03 compute-0 podman[256770]: 2026-01-21 23:53:03.023802433 +0000 UTC m=+0.152580012 container start 59abd8bbec8ad74a78fc7b840a0dedc99a821eab1acc098e0436d4e6dc7cdda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:53:03 compute-0 infallible_poincare[256787]: 167 167
Jan 21 23:53:03 compute-0 podman[256770]: 2026-01-21 23:53:03.028285071 +0000 UTC m=+0.157062650 container attach 59abd8bbec8ad74a78fc7b840a0dedc99a821eab1acc098e0436d4e6dc7cdda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 21 23:53:03 compute-0 systemd[1]: libpod-59abd8bbec8ad74a78fc7b840a0dedc99a821eab1acc098e0436d4e6dc7cdda4.scope: Deactivated successfully.
Jan 21 23:53:03 compute-0 conmon[256787]: conmon 59abd8bbec8ad74a78fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-59abd8bbec8ad74a78fc7b840a0dedc99a821eab1acc098e0436d4e6dc7cdda4.scope/container/memory.events
Jan 21 23:53:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:03.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:03 compute-0 podman[256792]: 2026-01-21 23:53:03.10017771 +0000 UTC m=+0.046972991 container died 59abd8bbec8ad74a78fc7b840a0dedc99a821eab1acc098e0436d4e6dc7cdda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poincare, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 23:53:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-54e74fd4f55dbd06e04d6885c4b318ca097fdfaf21888da224952ce92e29848c-merged.mount: Deactivated successfully.
Jan 21 23:53:03 compute-0 podman[256792]: 2026-01-21 23:53:03.145725777 +0000 UTC m=+0.092520998 container remove 59abd8bbec8ad74a78fc7b840a0dedc99a821eab1acc098e0436d4e6dc7cdda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 23:53:03 compute-0 systemd[1]: libpod-conmon-59abd8bbec8ad74a78fc7b840a0dedc99a821eab1acc098e0436d4e6dc7cdda4.scope: Deactivated successfully.
Jan 21 23:53:03 compute-0 podman[256814]: 2026-01-21 23:53:03.389802581 +0000 UTC m=+0.062697417 container create f466b55e382b324c45c3fdcc480f3b43afb8cf0dcdefbb885e27773581d075db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_leakey, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:53:03 compute-0 systemd[1]: Started libpod-conmon-f466b55e382b324c45c3fdcc480f3b43afb8cf0dcdefbb885e27773581d075db.scope.
Jan 21 23:53:03 compute-0 podman[256814]: 2026-01-21 23:53:03.359050142 +0000 UTC m=+0.031945038 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:53:03 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:53:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de571aa2483077bce1edadc009b82b4ce14fece134b2820eecdbc69f3cc39480/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:53:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de571aa2483077bce1edadc009b82b4ce14fece134b2820eecdbc69f3cc39480/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:53:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de571aa2483077bce1edadc009b82b4ce14fece134b2820eecdbc69f3cc39480/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:53:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de571aa2483077bce1edadc009b82b4ce14fece134b2820eecdbc69f3cc39480/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:53:03 compute-0 podman[256814]: 2026-01-21 23:53:03.500019114 +0000 UTC m=+0.172913990 container init f466b55e382b324c45c3fdcc480f3b43afb8cf0dcdefbb885e27773581d075db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_leakey, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 23:53:03 compute-0 podman[256814]: 2026-01-21 23:53:03.512679544 +0000 UTC m=+0.185574350 container start f466b55e382b324c45c3fdcc480f3b43afb8cf0dcdefbb885e27773581d075db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 23:53:03 compute-0 podman[256814]: 2026-01-21 23:53:03.517471442 +0000 UTC m=+0.190366248 container attach f466b55e382b324c45c3fdcc480f3b43afb8cf0dcdefbb885e27773581d075db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:53:03 compute-0 ceph-mon[74318]: pgmap v1031: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]: {
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:     "1": [
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:         {
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:             "devices": [
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:                 "/dev/loop3"
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:             ],
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:             "lv_name": "ceph_lv0",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:             "lv_size": "7511998464",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:             "name": "ceph_lv0",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:             "tags": {
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:                 "ceph.cluster_name": "ceph",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:                 "ceph.crush_device_class": "",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:                 "ceph.encrypted": "0",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:                 "ceph.osd_id": "1",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:                 "ceph.type": "block",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:                 "ceph.vdo": "0"
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:             },
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:             "type": "block",
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:             "vg_name": "ceph_vg0"
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:         }
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]:     ]
Jan 21 23:53:04 compute-0 quizzical_leakey[256832]: }
Jan 21 23:53:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:04.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:04 compute-0 systemd[1]: libpod-f466b55e382b324c45c3fdcc480f3b43afb8cf0dcdefbb885e27773581d075db.scope: Deactivated successfully.
Jan 21 23:53:04 compute-0 podman[256814]: 2026-01-21 23:53:04.355295556 +0000 UTC m=+1.028190392 container died f466b55e382b324c45c3fdcc480f3b43afb8cf0dcdefbb885e27773581d075db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:53:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-de571aa2483077bce1edadc009b82b4ce14fece134b2820eecdbc69f3cc39480-merged.mount: Deactivated successfully.
Jan 21 23:53:04 compute-0 podman[256814]: 2026-01-21 23:53:04.435965786 +0000 UTC m=+1.108860632 container remove f466b55e382b324c45c3fdcc480f3b43afb8cf0dcdefbb885e27773581d075db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 21 23:53:04 compute-0 systemd[1]: libpod-conmon-f466b55e382b324c45c3fdcc480f3b43afb8cf0dcdefbb885e27773581d075db.scope: Deactivated successfully.
Jan 21 23:53:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:04 compute-0 sudo[256705]: pam_unix(sudo:session): session closed for user root
Jan 21 23:53:04 compute-0 sudo[256854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:53:04 compute-0 sudo[256854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:53:04 compute-0 sudo[256854]: pam_unix(sudo:session): session closed for user root
Jan 21 23:53:04 compute-0 sudo[256879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:53:04 compute-0 sudo[256879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:53:04 compute-0 sudo[256879]: pam_unix(sudo:session): session closed for user root
Jan 21 23:53:04 compute-0 sudo[256904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:53:04 compute-0 sudo[256904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:53:04 compute-0 sudo[256904]: pam_unix(sudo:session): session closed for user root
Jan 21 23:53:04 compute-0 sudo[256929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:53:04 compute-0 sudo[256929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:53:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:05.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:05 compute-0 podman[256992]: 2026-01-21 23:53:05.191377776 +0000 UTC m=+0.061433378 container create dd6c533d0fd4373547ba51cb5e2121e5c926020283ac499c15ab930a1de80044 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hoover, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:53:05 compute-0 systemd[1]: Started libpod-conmon-dd6c533d0fd4373547ba51cb5e2121e5c926020283ac499c15ab930a1de80044.scope.
Jan 21 23:53:05 compute-0 podman[256992]: 2026-01-21 23:53:05.161123602 +0000 UTC m=+0.031179284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:53:05 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:53:05 compute-0 podman[256992]: 2026-01-21 23:53:05.285878104 +0000 UTC m=+0.155933716 container init dd6c533d0fd4373547ba51cb5e2121e5c926020283ac499c15ab930a1de80044 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hoover, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 23:53:05 compute-0 podman[256992]: 2026-01-21 23:53:05.296702848 +0000 UTC m=+0.166758450 container start dd6c533d0fd4373547ba51cb5e2121e5c926020283ac499c15ab930a1de80044 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hoover, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 21 23:53:05 compute-0 podman[256992]: 2026-01-21 23:53:05.301106563 +0000 UTC m=+0.171162215 container attach dd6c533d0fd4373547ba51cb5e2121e5c926020283ac499c15ab930a1de80044 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hoover, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 21 23:53:05 compute-0 clever_hoover[257008]: 167 167
Jan 21 23:53:05 compute-0 systemd[1]: libpod-dd6c533d0fd4373547ba51cb5e2121e5c926020283ac499c15ab930a1de80044.scope: Deactivated successfully.
Jan 21 23:53:05 compute-0 podman[256992]: 2026-01-21 23:53:05.304105326 +0000 UTC m=+0.174160918 container died dd6c533d0fd4373547ba51cb5e2121e5c926020283ac499c15ab930a1de80044 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:53:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7aabb3dea02c1a735ab2f3d9955775bf63b2a0601231e1414e76aeac28f96fe-merged.mount: Deactivated successfully.
Jan 21 23:53:05 compute-0 podman[256992]: 2026-01-21 23:53:05.355256375 +0000 UTC m=+0.225311977 container remove dd6c533d0fd4373547ba51cb5e2121e5c926020283ac499c15ab930a1de80044 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hoover, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 23:53:05 compute-0 systemd[1]: libpod-conmon-dd6c533d0fd4373547ba51cb5e2121e5c926020283ac499c15ab930a1de80044.scope: Deactivated successfully.
Jan 21 23:53:05 compute-0 ceph-mon[74318]: pgmap v1032: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:05 compute-0 podman[257031]: 2026-01-21 23:53:05.617594413 +0000 UTC m=+0.086525432 container create 18339fa2bbb398b0b4ad8a972b501f5876f78ac8abf3e7cfbb3fa2bce556652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 23:53:05 compute-0 systemd[1]: Started libpod-conmon-18339fa2bbb398b0b4ad8a972b501f5876f78ac8abf3e7cfbb3fa2bce556652f.scope.
Jan 21 23:53:05 compute-0 podman[257031]: 2026-01-21 23:53:05.59673172 +0000 UTC m=+0.065662749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:53:05 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46c3886530862abecced357f971709a4d0b9c6ef4de684600449ce4b556ccffb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46c3886530862abecced357f971709a4d0b9c6ef4de684600449ce4b556ccffb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46c3886530862abecced357f971709a4d0b9c6ef4de684600449ce4b556ccffb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46c3886530862abecced357f971709a4d0b9c6ef4de684600449ce4b556ccffb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:53:05 compute-0 podman[257031]: 2026-01-21 23:53:05.728434256 +0000 UTC m=+0.197365315 container init 18339fa2bbb398b0b4ad8a972b501f5876f78ac8abf3e7cfbb3fa2bce556652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_saha, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:53:05 compute-0 podman[257031]: 2026-01-21 23:53:05.736388481 +0000 UTC m=+0.205319470 container start 18339fa2bbb398b0b4ad8a972b501f5876f78ac8abf3e7cfbb3fa2bce556652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 21 23:53:05 compute-0 podman[257031]: 2026-01-21 23:53:05.740175457 +0000 UTC m=+0.209106526 container attach 18339fa2bbb398b0b4ad8a972b501f5876f78ac8abf3e7cfbb3fa2bce556652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_saha, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:53:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:53:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:06.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:53:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:06 compute-0 ceph-mon[74318]: pgmap v1033: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:06 compute-0 agitated_saha[257047]: {
Jan 21 23:53:06 compute-0 agitated_saha[257047]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:53:06 compute-0 agitated_saha[257047]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:53:06 compute-0 agitated_saha[257047]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:53:06 compute-0 agitated_saha[257047]:         "osd_id": 1,
Jan 21 23:53:06 compute-0 agitated_saha[257047]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:53:06 compute-0 agitated_saha[257047]:         "type": "bluestore"
Jan 21 23:53:06 compute-0 agitated_saha[257047]:     }
Jan 21 23:53:06 compute-0 agitated_saha[257047]: }
Jan 21 23:53:06 compute-0 systemd[1]: libpod-18339fa2bbb398b0b4ad8a972b501f5876f78ac8abf3e7cfbb3fa2bce556652f.scope: Deactivated successfully.
Jan 21 23:53:06 compute-0 podman[257031]: 2026-01-21 23:53:06.624318662 +0000 UTC m=+1.093249701 container died 18339fa2bbb398b0b4ad8a972b501f5876f78ac8abf3e7cfbb3fa2bce556652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_saha, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:53:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-46c3886530862abecced357f971709a4d0b9c6ef4de684600449ce4b556ccffb-merged.mount: Deactivated successfully.
Jan 21 23:53:06 compute-0 podman[257031]: 2026-01-21 23:53:06.686102808 +0000 UTC m=+1.155033837 container remove 18339fa2bbb398b0b4ad8a972b501f5876f78ac8abf3e7cfbb3fa2bce556652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_saha, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 21 23:53:06 compute-0 systemd[1]: libpod-conmon-18339fa2bbb398b0b4ad8a972b501f5876f78ac8abf3e7cfbb3fa2bce556652f.scope: Deactivated successfully.
Jan 21 23:53:06 compute-0 sudo[256929]: pam_unix(sudo:session): session closed for user root
Jan 21 23:53:06 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:53:06 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:53:06 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:53:06 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:53:06 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev ec1fa615-eb1c-4b7a-bb5c-70179a24fbe6 does not exist
Jan 21 23:53:06 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 0f4d8e7f-f8fd-43a1-a8cc-ed7d6cf307d9 does not exist
Jan 21 23:53:06 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 30504e59-184c-47cf-af6b-125467f705ab does not exist
Jan 21 23:53:06 compute-0 sudo[257079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:53:06 compute-0 sudo[257079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:53:06 compute-0 sudo[257079]: pam_unix(sudo:session): session closed for user root
Jan 21 23:53:06 compute-0 sudo[257104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:53:06 compute-0 sudo[257104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:53:06 compute-0 sudo[257104]: pam_unix(sudo:session): session closed for user root
Jan 21 23:53:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:53:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:07.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:53:07 compute-0 podman[257128]: 2026-01-21 23:53:07.115466493 +0000 UTC m=+0.104179027 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 21 23:53:07 compute-0 sudo[257149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:53:07 compute-0 sudo[257149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:53:07 compute-0 sudo[257149]: pam_unix(sudo:session): session closed for user root
Jan 21 23:53:07 compute-0 sudo[257174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:53:07 compute-0 sudo[257174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:53:07 compute-0 sudo[257174]: pam_unix(sudo:session): session closed for user root
Jan 21 23:53:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:53:07 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:53:07 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:53:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:53:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:08.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:53:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:08 compute-0 ceph-mon[74318]: pgmap v1034: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:08 compute-0 nova_compute[247516]: 2026-01-21 23:53:08.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:53:08 compute-0 nova_compute[247516]: 2026-01-21 23:53:08.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 23:53:08 compute-0 nova_compute[247516]: 2026-01-21 23:53:08.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 23:53:09 compute-0 nova_compute[247516]: 2026-01-21 23:53:09.010 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 23:53:09 compute-0 nova_compute[247516]: 2026-01-21 23:53:09.011 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:53:09 compute-0 nova_compute[247516]: 2026-01-21 23:53:09.011 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 23:53:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:09.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:53:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:53:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:53:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:53:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:53:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:53:10 compute-0 nova_compute[247516]: 2026-01-21 23:53:10.007 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:53:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:10.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:10 compute-0 nova_compute[247516]: 2026-01-21 23:53:10.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:53:10 compute-0 nova_compute[247516]: 2026-01-21 23:53:10.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:53:10 compute-0 nova_compute[247516]: 2026-01-21 23:53:10.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:53:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:11.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:11 compute-0 ceph-mon[74318]: pgmap v1035: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:11 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1954457734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:53:11 compute-0 nova_compute[247516]: 2026-01-21 23:53:11.988 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:53:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:12.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:12 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2867818168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:53:12 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/146466926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:53:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:53:12 compute-0 nova_compute[247516]: 2026-01-21 23:53:12.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:53:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:53:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:13.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:53:13 compute-0 ceph-mon[74318]: pgmap v1036: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:13 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1934989059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:53:13 compute-0 nova_compute[247516]: 2026-01-21 23:53:13.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:53:14 compute-0 nova_compute[247516]: 2026-01-21 23:53:14.026 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:53:14 compute-0 nova_compute[247516]: 2026-01-21 23:53:14.027 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:53:14 compute-0 nova_compute[247516]: 2026-01-21 23:53:14.027 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:53:14 compute-0 nova_compute[247516]: 2026-01-21 23:53:14.028 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 23:53:14 compute-0 nova_compute[247516]: 2026-01-21 23:53:14.029 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:53:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:53:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:14.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:53:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:53:14 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1139995859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:53:14 compute-0 nova_compute[247516]: 2026-01-21 23:53:14.556 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:53:14 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1139995859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:53:14 compute-0 nova_compute[247516]: 2026-01-21 23:53:14.843 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 23:53:14 compute-0 nova_compute[247516]: 2026-01-21 23:53:14.845 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5144MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 23:53:14 compute-0 nova_compute[247516]: 2026-01-21 23:53:14.846 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:53:14 compute-0 nova_compute[247516]: 2026-01-21 23:53:14.847 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:53:14 compute-0 nova_compute[247516]: 2026-01-21 23:53:14.948 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 23:53:14 compute-0 nova_compute[247516]: 2026-01-21 23:53:14.949 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 23:53:14 compute-0 nova_compute[247516]: 2026-01-21 23:53:14.971 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:53:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:53:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:15.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:53:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:53:15 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2334927219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:53:15 compute-0 nova_compute[247516]: 2026-01-21 23:53:15.442 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:53:15 compute-0 nova_compute[247516]: 2026-01-21 23:53:15.451 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 23:53:15 compute-0 nova_compute[247516]: 2026-01-21 23:53:15.475 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 23:53:15 compute-0 nova_compute[247516]: 2026-01-21 23:53:15.478 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 23:53:15 compute-0 nova_compute[247516]: 2026-01-21 23:53:15.478 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:53:15 compute-0 ceph-mon[74318]: pgmap v1037: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:15 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2334927219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:53:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:16.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:16 compute-0 ceph-mon[74318]: pgmap v1038: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:53:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:17.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:53:17 compute-0 nova_compute[247516]: 2026-01-21 23:53:17.479 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:53:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:53:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:18.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:53:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:19.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:53:19 compute-0 ceph-mon[74318]: pgmap v1039: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:20.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:20 compute-0 ceph-mon[74318]: pgmap v1040: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:21.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:22.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:53:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:23.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:23 compute-0 ceph-mon[74318]: pgmap v1041: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:53:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:24.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:53:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:25.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:25 compute-0 ceph-mon[74318]: pgmap v1042: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 21 23:53:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4145194391' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:53:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 21 23:53:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4145194391' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:53:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:26.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/4145194391' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:53:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/4145194391' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:53:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:27.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:27 compute-0 sudo[257255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:53:27 compute-0 sudo[257255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:53:27 compute-0 sudo[257255]: pam_unix(sudo:session): session closed for user root
Jan 21 23:53:27 compute-0 sudo[257280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:53:27 compute-0 sudo[257280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:53:27 compute-0 sudo[257280]: pam_unix(sudo:session): session closed for user root
Jan 21 23:53:27 compute-0 ceph-mon[74318]: pgmap v1043: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:53:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:28.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:28 compute-0 ceph-mon[74318]: pgmap v1044: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:53:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:29.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:53:30 compute-0 podman[257306]: 2026-01-21 23:53:30.066713531 +0000 UTC m=+0.170754152 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 21 23:53:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:30.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:31.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:31 compute-0 ceph-mon[74318]: pgmap v1045: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:32.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:53:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:33.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:33 compute-0 ceph-mon[74318]: pgmap v1046: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:34.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:53:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:35.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:53:35 compute-0 ceph-mon[74318]: pgmap v1047: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:53:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:36.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:53:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:36 compute-0 ceph-mon[74318]: pgmap v1048: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:37.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:53:37 compute-0 podman[257338]: 2026-01-21 23:53:37.953403174 +0000 UTC m=+0.062734408 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 23:53:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:38.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:38 compute-0 ceph-mon[74318]: pgmap v1049: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:53:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:53:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:39.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:53:39
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'volumes', 'vms', 'default.rgw.meta', 'images']
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:53:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:53:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Jan 21 23:53:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Jan 21 23:53:39 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Jan 21 23:53:40 compute-0 ceph-mgr[74614]: client.0 ms_handle_reset on v2:192.168.122.100:6800/934453051
Jan 21 23:53:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:53:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:40.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:53:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 9.4 KiB/s rd, 1.4 KiB/s wr, 13 op/s
Jan 21 23:53:40 compute-0 ceph-mon[74318]: osdmap e158: 3 total, 3 up, 3 in
Jan 21 23:53:40 compute-0 ceph-mon[74318]: pgmap v1051: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 9.4 KiB/s rd, 1.4 KiB/s wr, 13 op/s
Jan 21 23:53:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:41.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:42.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 1.4 KiB/s wr, 14 op/s
Jan 21 23:53:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:53:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:53:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:43.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:53:43 compute-0 ceph-mon[74318]: pgmap v1052: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 1.4 KiB/s wr, 14 op/s
Jan 21 23:53:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:44.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 1.4 KiB/s wr, 14 op/s
Jan 21 23:53:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:45.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Jan 21 23:53:45 compute-0 ceph-mon[74318]: pgmap v1053: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 1.4 KiB/s wr, 14 op/s
Jan 21 23:53:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Jan 21 23:53:45 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Jan 21 23:53:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:53:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:46.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:53:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.2 KiB/s wr, 19 op/s
Jan 21 23:53:46 compute-0 ceph-mon[74318]: osdmap e159: 3 total, 3 up, 3 in
Jan 21 23:53:46 compute-0 ceph-mon[74318]: pgmap v1055: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.2 KiB/s wr, 19 op/s
Jan 21 23:53:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:47.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:47 compute-0 sudo[257364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:53:47 compute-0 sudo[257364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:53:47 compute-0 sudo[257364]: pam_unix(sudo:session): session closed for user root
Jan 21 23:53:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:53:47 compute-0 sudo[257389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:53:47 compute-0 sudo[257389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:53:47 compute-0 sudo[257389]: pam_unix(sudo:session): session closed for user root
Jan 21 23:53:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:48.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 1.9 KiB/s wr, 18 op/s
Jan 21 23:53:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:53:48.750 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:53:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:53:48.751 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:53:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:53:48.752 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:53:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:49.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Jan 21 23:53:49 compute-0 ceph-mon[74318]: pgmap v1056: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 1.9 KiB/s wr, 18 op/s
Jan 21 23:53:49 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Jan 21 23:53:49 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Jan 21 23:53:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:50.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 4.1 KiB/s wr, 37 op/s
Jan 21 23:53:50 compute-0 ceph-mon[74318]: osdmap e160: 3 total, 3 up, 3 in
Jan 21 23:53:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:53:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:51.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:53:51 compute-0 ceph-mon[74318]: pgmap v1058: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 4.1 KiB/s wr, 37 op/s
Jan 21 23:53:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:53:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:52.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:53:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 4.2 KiB/s wr, 38 op/s
Jan 21 23:53:52 compute-0 ceph-mon[74318]: pgmap v1059: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 4.2 KiB/s wr, 38 op/s
Jan 21 23:53:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:53:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:53.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019053237832681927 of space, bias 1.0, pg target 0.5715971349804578 quantized to 32 (current 32)
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:53:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:53:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:54.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:53:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 3.4 KiB/s wr, 32 op/s
Jan 21 23:53:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:55.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:55 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:53:55.447 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 23:53:55 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:53:55.450 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 23:53:55 compute-0 ceph-mon[74318]: pgmap v1060: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 3.4 KiB/s wr, 32 op/s
Jan 21 23:53:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:56.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:56 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:53:56.453 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 23:53:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 3.0 KiB/s wr, 29 op/s
Jan 21 23:53:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:57.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:57 compute-0 ceph-mon[74318]: pgmap v1061: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 3.0 KiB/s wr, 29 op/s
Jan 21 23:53:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:53:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:53:58.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.4 KiB/s wr, 25 op/s
Jan 21 23:53:58 compute-0 ceph-mon[74318]: pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.4 KiB/s wr, 25 op/s
Jan 21 23:53:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:53:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:53:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:53:59.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:53:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Jan 21 23:53:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Jan 21 23:53:59 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:53:59.664902) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039639664971, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2163, "num_deletes": 254, "total_data_size": 3881780, "memory_usage": 3947200, "flush_reason": "Manual Compaction"}
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039639695856, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 3803347, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22013, "largest_seqno": 24175, "table_properties": {"data_size": 3793575, "index_size": 6266, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19740, "raw_average_key_size": 20, "raw_value_size": 3774093, "raw_average_value_size": 3894, "num_data_blocks": 278, "num_entries": 969, "num_filter_entries": 969, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769039429, "oldest_key_time": 1769039429, "file_creation_time": 1769039639, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 30996 microseconds, and 10361 cpu microseconds.
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:53:59.695913) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 3803347 bytes OK
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:53:59.695934) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:53:59.705075) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:53:59.705089) EVENT_LOG_v1 {"time_micros": 1769039639705085, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:53:59.705104) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 3873069, prev total WAL file size 3873069, number of live WAL files 2.
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:53:59.705988) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(3714KB)], [53(7301KB)]
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039639706088, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11279762, "oldest_snapshot_seqno": -1}
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4862 keys, 9223540 bytes, temperature: kUnknown
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039639778906, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9223540, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9189504, "index_size": 20806, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 122750, "raw_average_key_size": 25, "raw_value_size": 9099828, "raw_average_value_size": 1871, "num_data_blocks": 852, "num_entries": 4862, "num_filter_entries": 4862, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769039639, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:53:59.779258) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9223540 bytes
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:53:59.781221) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.6 rd, 126.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 7.1 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(5.4) write-amplify(2.4) OK, records in: 5386, records dropped: 524 output_compression: NoCompression
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:53:59.781242) EVENT_LOG_v1 {"time_micros": 1769039639781231, "job": 28, "event": "compaction_finished", "compaction_time_micros": 72953, "compaction_time_cpu_micros": 28527, "output_level": 6, "num_output_files": 1, "total_output_size": 9223540, "num_input_records": 5386, "num_output_records": 4862, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039639782433, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039639784301, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:53:59.705926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:53:59.784394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:53:59.784400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:53:59.784402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:53:59.784404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:53:59 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:53:59.784406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:54:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:00.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 9.5 KiB/s rd, 1.4 KiB/s wr, 14 op/s
Jan 21 23:54:00 compute-0 ceph-mon[74318]: osdmap e161: 3 total, 3 up, 3 in
Jan 21 23:54:00 compute-0 ceph-mon[74318]: pgmap v1064: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 9.5 KiB/s rd, 1.4 KiB/s wr, 14 op/s
Jan 21 23:54:01 compute-0 podman[257420]: 2026-01-21 23:54:01.003221196 +0000 UTC m=+0.103344303 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 21 23:54:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:01.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Jan 21 23:54:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Jan 21 23:54:01 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Jan 21 23:54:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:54:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:02.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:54:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 2.2 KiB/s wr, 24 op/s
Jan 21 23:54:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Jan 21 23:54:02 compute-0 ceph-mon[74318]: osdmap e162: 3 total, 3 up, 3 in
Jan 21 23:54:02 compute-0 ceph-mon[74318]: pgmap v1066: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 2.2 KiB/s wr, 24 op/s
Jan 21 23:54:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Jan 21 23:54:02 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Jan 21 23:54:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:54:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:03.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:03 compute-0 ceph-mon[74318]: osdmap e163: 3 total, 3 up, 3 in
Jan 21 23:54:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:04.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 3.0 KiB/s wr, 31 op/s
Jan 21 23:54:04 compute-0 ceph-mon[74318]: pgmap v1068: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 3.0 KiB/s wr, 31 op/s
Jan 21 23:54:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:05.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:54:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:06.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:54:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 79 KiB/s rd, 6.1 KiB/s wr, 108 op/s
Jan 21 23:54:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:54:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:07.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:54:07 compute-0 sudo[257450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:54:07 compute-0 sudo[257450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:07 compute-0 sudo[257450]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:07 compute-0 ceph-mon[74318]: pgmap v1069: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 79 KiB/s rd, 6.1 KiB/s wr, 108 op/s
Jan 21 23:54:07 compute-0 sudo[257475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:54:07 compute-0 sudo[257475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:07 compute-0 sudo[257475]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:07 compute-0 sudo[257500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:54:07 compute-0 sudo[257500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:07 compute-0 sudo[257500]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:54:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Jan 21 23:54:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Jan 21 23:54:07 compute-0 sudo[257525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:54:07 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Jan 21 23:54:07 compute-0 sudo[257525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:07 compute-0 sudo[257550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:54:07 compute-0 sudo[257550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:07 compute-0 sudo[257550]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:08 compute-0 sudo[257575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:54:08 compute-0 sudo[257575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:08 compute-0 sudo[257575]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:08 compute-0 podman[257611]: 2026-01-21 23:54:08.124461759 +0000 UTC m=+0.100214054 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 23:54:08 compute-0 sudo[257525]: pam_unix(sudo:session): session closed for user root
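
[editor's note] The sudo session above (opened 23:54:07, closed 23:54:08) brackets one cephadm "gather-facts" run; dozens of these short-lived root sessions appear throughout this capture. A minimal sketch, assuming the journal has been exported to a text file, for auditing which cephadm commands were run via sudo — the filename and regex are assumptions fitted to the line format shown here, not part of this capture:

    import re

    # Matches journald sudo lines like:
    #   sudo[257525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 ...cephadm... gather-facts
    SUDO_CMD = re.compile(
        r"sudo\[(?P<pid>\d+)\]: (?P<user>\S+) : PWD=(?P<pwd>\S+) ; "
        r"USER=(?P<runas>\S+) ; COMMAND=(?P<cmd>.*)$"
    )

    def cephadm_commands(path="journal.txt"):  # hypothetical export path
        """Yield (pid, command) for every cephadm invocation made via sudo."""
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                m = SUDO_CMD.search(line)
                if m and "cephadm" in m.group("cmd"):
                    yield m.group("pid"), m.group("cmd")
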
Jan 21 23:54:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:08.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
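
[editor's note] The three radosgw lines above are a single request: a start/done lifecycle pair plus the beast access-log entry. The once-per-second anonymous "HEAD /" probes alternating from 192.168.122.100 and .102 look like load-balancer health checks. A sketch, under the assumption that the regex below matches only the beast format seen in this log, for pulling status and latency out of those lines:

    import re

    # Fits: beast: 0x...: 192.168.122.100 - anonymous [21/Jan/2026:23:54:08.402 +0000]
    #       "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<size>\d+).*latency=(?P<lat>[\d.]+)s'
    )

    def beast_requests(lines):
        for line in lines:
            m = BEAST.search(line)
            if m:
                yield m.group("ip"), int(m.group("status")), float(m.group("lat"))

    # e.g. worst health-check latency in an exported journal:
    # max(lat for _ip, _status, lat in beast_requests(open("journal.txt")))
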
Jan 21 23:54:08 compute-0 sudo[257651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:54:08 compute-0 sudo[257651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:08 compute-0 sudo[257651]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 4.3 KiB/s wr, 90 op/s
Jan 21 23:54:08 compute-0 sudo[257676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:54:08 compute-0 sudo[257676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:08 compute-0 sudo[257676]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:08 compute-0 sudo[257701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:54:08 compute-0 sudo[257701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:08 compute-0 sudo[257701]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:08 compute-0 sudo[257726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 21 23:54:08 compute-0 sudo[257726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:08 compute-0 ceph-mon[74318]: osdmap e164: 3 total, 3 up, 3 in
Jan 21 23:54:08 compute-0 ceph-mon[74318]: pgmap v1071: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 4.3 KiB/s wr, 90 op/s
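
[editor's note] These mon/mgr digests describe a healthy cluster: osdmap e164 has all 3 OSDs up and in, and pgmap v1071 reports all 305 placement groups active+clean. A sketch that flags any pgmap line in which the PG states are not uniformly active+clean; the regex is an assumption based on the digest format shown here:

    import re

    PGMAP = re.compile(r"pgmap v\d+: (?P<total>\d+) pgs: (?P<states>[^;]+);")

    def unhealthy_pgmaps(lines):
        for line in lines:
            m = PGMAP.search(line)
            if not m:
                continue
            total = int(m.group("total"))
            # states looks like "305 active+clean" or "300 active+clean, 5 peering"
            clean = sum(
                int(n) for n, state in
                (part.strip().split(" ", 1) for part in m.group("states").split(","))
                if state == "active+clean"
            )
            if clean != total:
                yield line.rstrip()
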
Jan 21 23:54:09 compute-0 sudo[257726]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:54:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:54:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:54:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:54:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:09.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:54:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:54:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:54:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:54:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:54:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:54:09 compute-0 nova_compute[247516]: 2026-01-21 23:54:09.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:54:09 compute-0 nova_compute[247516]: 2026-01-21 23:54:09.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
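
[editor's note] nova-compute skips _reclaim_queued_deletes here because CONF.reclaim_instance_interval <= 0; at its default of 0, soft-deleted instances are never reclaimed by this periodic task. A sketch for checking the effective value on a node — the /etc/nova/nova.conf path is an assumption about where this deployment renders its config:

    from configparser import ConfigParser

    cfg = ConfigParser()
    cfg.read("/etc/nova/nova.conf")  # assumed location; adjust per deployment

    interval = cfg.getint("DEFAULT", "reclaim_instance_interval", fallback=0)
    if interval <= 0:
        print("soft-delete reclaim disabled (matches the 'skipping...' log line)")
    else:
        print(f"queued deletes reclaimed every {interval}s")
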
Jan 21 23:54:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:54:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:54:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:54:10 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:54:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:54:10 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:54:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:10.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 53 KiB/s rd, 3.0 KiB/s wr, 69 op/s
Jan 21 23:54:10 compute-0 nova_compute[247516]: 2026-01-21 23:54:10.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:54:10 compute-0 nova_compute[247516]: 2026-01-21 23:54:10.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 23:54:10 compute-0 nova_compute[247516]: 2026-01-21 23:54:10.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 23:54:11 compute-0 nova_compute[247516]: 2026-01-21 23:54:11.031 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 23:54:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:54:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:54:11 compute-0 ceph-mon[74318]: pgmap v1072: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 53 KiB/s rd, 3.0 KiB/s wr, 69 op/s
Jan 21 23:54:11 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/4002626561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:54:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:54:11 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:54:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:54:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:54:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:54:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:54:11 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 0c28a244-307a-4d23-a527-d23e55c61bb2 does not exist
Jan 21 23:54:11 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev fe7e5908-64da-4964-b316-3101b3b74222 does not exist
Jan 21 23:54:11 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 32c77a8e-69aa-4d53-9a42-24f45285af73 does not exist
Jan 21 23:54:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:54:11 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:54:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:54:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:54:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:54:11 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:54:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:11.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:11 compute-0 sudo[257771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:54:11 compute-0 sudo[257771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:11 compute-0 sudo[257771]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:11 compute-0 sudo[257796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:54:11 compute-0 sudo[257796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:11 compute-0 sudo[257796]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:11 compute-0 sudo[257821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:54:11 compute-0 sudo[257821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:11 compute-0 sudo[257821]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:11 compute-0 sudo[257846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:54:11 compute-0 sudo[257846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:11 compute-0 podman[257911]: 2026-01-21 23:54:11.934403473 +0000 UTC m=+0.080242248 container create d674461387ffb62a3e2695cb51e87d2f9f0c9c7ee5c7e0bd932a53bdc3fb8acf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:54:11 compute-0 systemd[1]: Started libpod-conmon-d674461387ffb62a3e2695cb51e87d2f9f0c9c7ee5c7e0bd932a53bdc3fb8acf.scope.
Jan 21 23:54:11 compute-0 nova_compute[247516]: 2026-01-21 23:54:11.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:54:11 compute-0 nova_compute[247516]: 2026-01-21 23:54:11.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:54:11 compute-0 podman[257911]: 2026-01-21 23:54:11.903112176 +0000 UTC m=+0.048951041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:54:12 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:54:12 compute-0 podman[257911]: 2026-01-21 23:54:12.054622774 +0000 UTC m=+0.200461589 container init d674461387ffb62a3e2695cb51e87d2f9f0c9c7ee5c7e0bd932a53bdc3fb8acf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 21 23:54:12 compute-0 podman[257911]: 2026-01-21 23:54:12.06582316 +0000 UTC m=+0.211661945 container start d674461387ffb62a3e2695cb51e87d2f9f0c9c7ee5c7e0bd932a53bdc3fb8acf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 21 23:54:12 compute-0 podman[257911]: 2026-01-21 23:54:12.070491003 +0000 UTC m=+0.216329788 container attach d674461387ffb62a3e2695cb51e87d2f9f0c9c7ee5c7e0bd932a53bdc3fb8acf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ganguly, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 23:54:12 compute-0 strange_ganguly[257927]: 167 167
Jan 21 23:54:12 compute-0 systemd[1]: libpod-d674461387ffb62a3e2695cb51e87d2f9f0c9c7ee5c7e0bd932a53bdc3fb8acf.scope: Deactivated successfully.
Jan 21 23:54:12 compute-0 podman[257911]: 2026-01-21 23:54:12.074307332 +0000 UTC m=+0.220146147 container died d674461387ffb62a3e2695cb51e87d2f9f0c9c7ee5c7e0bd932a53bdc3fb8acf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 21 23:54:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:54:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:54:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:54:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:54:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:54:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:54:12 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/981800427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:54:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3704e83108ba03bf6678c58bdebf990dfa27f053ac7a816c00b4ff8c2e3bcf32-merged.mount: Deactivated successfully.
Jan 21 23:54:12 compute-0 podman[257911]: 2026-01-21 23:54:12.132672283 +0000 UTC m=+0.278511068 container remove d674461387ffb62a3e2695cb51e87d2f9f0c9c7ee5c7e0bd932a53bdc3fb8acf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ganguly, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:54:12 compute-0 systemd[1]: libpod-conmon-d674461387ffb62a3e2695cb51e87d2f9f0c9c7ee5c7e0bd932a53bdc3fb8acf.scope: Deactivated successfully.
Jan 21 23:54:12 compute-0 podman[257951]: 2026-01-21 23:54:12.370534346 +0000 UTC m=+0.053901465 container create 0051bea8317ae1ecf88617ac71a50627e0a43e1f20cdeed751eb60b8c75bd18d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hofstadter, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:54:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:54:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:12.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:54:12 compute-0 systemd[1]: Started libpod-conmon-0051bea8317ae1ecf88617ac71a50627e0a43e1f20cdeed751eb60b8c75bd18d.scope.
Jan 21 23:54:12 compute-0 podman[257951]: 2026-01-21 23:54:12.341712276 +0000 UTC m=+0.025079415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:54:12 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c24390fc123fb2905eecb2d2b5b6de1dd3955b46edb5cd5ae1215f510a8a796/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c24390fc123fb2905eecb2d2b5b6de1dd3955b46edb5cd5ae1215f510a8a796/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c24390fc123fb2905eecb2d2b5b6de1dd3955b46edb5cd5ae1215f510a8a796/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c24390fc123fb2905eecb2d2b5b6de1dd3955b46edb5cd5ae1215f510a8a796/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c24390fc123fb2905eecb2d2b5b6de1dd3955b46edb5cd5ae1215f510a8a796/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
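
[editor's note] The kernel's "supports timestamps until 2038 (0x7fffffff)" lines are informational, not errors: 0x7fffffff is the largest 32-bit time_t, and the kernel prints the warning whenever a filesystem with 32-bit timestamps is (re)mounted, here for bind mounts into the container's overlay. A worked check of the cutoff:

    from datetime import datetime, timezone

    # 0x7fffffff is the 32-bit time_t limit the kernel message refers to.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
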
Jan 21 23:54:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 43 KiB/s rd, 2.5 KiB/s wr, 57 op/s
Jan 21 23:54:12 compute-0 podman[257951]: 2026-01-21 23:54:12.500964353 +0000 UTC m=+0.184331552 container init 0051bea8317ae1ecf88617ac71a50627e0a43e1f20cdeed751eb60b8c75bd18d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 21 23:54:12 compute-0 podman[257951]: 2026-01-21 23:54:12.511776656 +0000 UTC m=+0.195143805 container start 0051bea8317ae1ecf88617ac71a50627e0a43e1f20cdeed751eb60b8c75bd18d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 21 23:54:12 compute-0 podman[257951]: 2026-01-21 23:54:12.515856932 +0000 UTC m=+0.199224091 container attach 0051bea8317ae1ecf88617ac71a50627e0a43e1f20cdeed751eb60b8c75bd18d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hofstadter, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 21 23:54:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:54:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Jan 21 23:54:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Jan 21 23:54:12 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Jan 21 23:54:12 compute-0 nova_compute[247516]: 2026-01-21 23:54:12.988 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:54:12 compute-0 nova_compute[247516]: 2026-01-21 23:54:12.990 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:54:13 compute-0 ceph-mon[74318]: pgmap v1073: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 43 KiB/s rd, 2.5 KiB/s wr, 57 op/s
Jan 21 23:54:13 compute-0 ceph-mon[74318]: osdmap e165: 3 total, 3 up, 3 in
Jan 21 23:54:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:13.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:13 compute-0 kind_hofstadter[257967]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:54:13 compute-0 kind_hofstadter[257967]: --> relative data size: 1.0
Jan 21 23:54:13 compute-0 kind_hofstadter[257967]: --> All data devices are unavailable
Jan 21 23:54:13 compute-0 systemd[1]: libpod-0051bea8317ae1ecf88617ac71a50627e0a43e1f20cdeed751eb60b8c75bd18d.scope: Deactivated successfully.
Jan 21 23:54:13 compute-0 podman[257951]: 2026-01-21 23:54:13.419446726 +0000 UTC m=+1.102813875 container died 0051bea8317ae1ecf88617ac71a50627e0a43e1f20cdeed751eb60b8c75bd18d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hofstadter, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:54:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c24390fc123fb2905eecb2d2b5b6de1dd3955b46edb5cd5ae1215f510a8a796-merged.mount: Deactivated successfully.
Jan 21 23:54:13 compute-0 podman[257951]: 2026-01-21 23:54:13.514538972 +0000 UTC m=+1.197906101 container remove 0051bea8317ae1ecf88617ac71a50627e0a43e1f20cdeed751eb60b8c75bd18d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hofstadter, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 23:54:13 compute-0 systemd[1]: libpod-conmon-0051bea8317ae1ecf88617ac71a50627e0a43e1f20cdeed751eb60b8c75bd18d.scope: Deactivated successfully.
Jan 21 23:54:13 compute-0 sudo[257846]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:13 compute-0 sudo[257995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:54:13 compute-0 sudo[257995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:13 compute-0 sudo[257995]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:13 compute-0 sudo[258020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:54:13 compute-0 sudo[258020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:13 compute-0 sudo[258020]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:13 compute-0 sudo[258045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:54:13 compute-0 sudo[258045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:13 compute-0 sudo[258045]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:13 compute-0 sudo[258070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:54:13 compute-0 sudo[258070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:14 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2945077550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:54:14 compute-0 podman[258136]: 2026-01-21 23:54:14.336227507 +0000 UTC m=+0.058741954 container create 89839bbc562be6bd7e4498eade1c0fe08851599bdd61de9bd65f18f439ae357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elbakyan, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 23:54:14 compute-0 systemd[1]: Started libpod-conmon-89839bbc562be6bd7e4498eade1c0fe08851599bdd61de9bd65f18f439ae357a.scope.
Jan 21 23:54:14 compute-0 podman[258136]: 2026-01-21 23:54:14.310241315 +0000 UTC m=+0.032755782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:54:14 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:54:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:14.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:14 compute-0 podman[258136]: 2026-01-21 23:54:14.426154693 +0000 UTC m=+0.148669220 container init 89839bbc562be6bd7e4498eade1c0fe08851599bdd61de9bd65f18f439ae357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elbakyan, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 23:54:14 compute-0 podman[258136]: 2026-01-21 23:54:14.434921364 +0000 UTC m=+0.157435841 container start 89839bbc562be6bd7e4498eade1c0fe08851599bdd61de9bd65f18f439ae357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 21 23:54:14 compute-0 podman[258136]: 2026-01-21 23:54:14.438814944 +0000 UTC m=+0.161329501 container attach 89839bbc562be6bd7e4498eade1c0fe08851599bdd61de9bd65f18f439ae357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elbakyan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 21 23:54:14 compute-0 keen_elbakyan[258153]: 167 167
Jan 21 23:54:14 compute-0 systemd[1]: libpod-89839bbc562be6bd7e4498eade1c0fe08851599bdd61de9bd65f18f439ae357a.scope: Deactivated successfully.
Jan 21 23:54:14 compute-0 podman[258136]: 2026-01-21 23:54:14.442792177 +0000 UTC m=+0.165306624 container died 89839bbc562be6bd7e4498eade1c0fe08851599bdd61de9bd65f18f439ae357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 21 23:54:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c840f2683875f544927af39aa344b01ffc7a536cf990a2cd7798ffc1ba2ba80-merged.mount: Deactivated successfully.
Jan 21 23:54:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 21 23:54:14 compute-0 podman[258136]: 2026-01-21 23:54:14.488003833 +0000 UTC m=+0.210518310 container remove 89839bbc562be6bd7e4498eade1c0fe08851599bdd61de9bd65f18f439ae357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 21 23:54:14 compute-0 systemd[1]: libpod-conmon-89839bbc562be6bd7e4498eade1c0fe08851599bdd61de9bd65f18f439ae357a.scope: Deactivated successfully.
Jan 21 23:54:14 compute-0 podman[258176]: 2026-01-21 23:54:14.71205928 +0000 UTC m=+0.056745154 container create 1116596ed1b56f36f50aa1c52b7ee168216d7ae8cc46559d17d8693fe9be1436 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:54:14 compute-0 systemd[1]: Started libpod-conmon-1116596ed1b56f36f50aa1c52b7ee168216d7ae8cc46559d17d8693fe9be1436.scope.
Jan 21 23:54:14 compute-0 podman[258176]: 2026-01-21 23:54:14.684984993 +0000 UTC m=+0.029670907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:54:14 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:54:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2a54a31a8e13ce09f3fa6c7f391f4581ff270b73a2f30fbb2b65d8aa6f19db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:54:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2a54a31a8e13ce09f3fa6c7f391f4581ff270b73a2f30fbb2b65d8aa6f19db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:54:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2a54a31a8e13ce09f3fa6c7f391f4581ff270b73a2f30fbb2b65d8aa6f19db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:54:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2a54a31a8e13ce09f3fa6c7f391f4581ff270b73a2f30fbb2b65d8aa6f19db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:54:14 compute-0 podman[258176]: 2026-01-21 23:54:14.821305362 +0000 UTC m=+0.165991276 container init 1116596ed1b56f36f50aa1c52b7ee168216d7ae8cc46559d17d8693fe9be1436 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kare, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:54:14 compute-0 podman[258176]: 2026-01-21 23:54:14.835433978 +0000 UTC m=+0.180119852 container start 1116596ed1b56f36f50aa1c52b7ee168216d7ae8cc46559d17d8693fe9be1436 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:54:14 compute-0 podman[258176]: 2026-01-21 23:54:14.840201375 +0000 UTC m=+0.184887309 container attach 1116596ed1b56f36f50aa1c52b7ee168216d7ae8cc46559d17d8693fe9be1436 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kare, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 23:54:14 compute-0 nova_compute[247516]: 2026-01-21 23:54:14.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:54:15 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1986910469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:54:15 compute-0 ceph-mon[74318]: pgmap v1075: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 21 23:54:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:15.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:15 compute-0 friendly_kare[258193]: {
Jan 21 23:54:15 compute-0 friendly_kare[258193]:     "1": [
Jan 21 23:54:15 compute-0 friendly_kare[258193]:         {
Jan 21 23:54:15 compute-0 friendly_kare[258193]:             "devices": [
Jan 21 23:54:15 compute-0 friendly_kare[258193]:                 "/dev/loop3"
Jan 21 23:54:15 compute-0 friendly_kare[258193]:             ],
Jan 21 23:54:15 compute-0 friendly_kare[258193]:             "lv_name": "ceph_lv0",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:             "lv_size": "7511998464",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:             "name": "ceph_lv0",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:             "tags": {
Jan 21 23:54:15 compute-0 friendly_kare[258193]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:                 "ceph.cluster_name": "ceph",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:                 "ceph.crush_device_class": "",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:                 "ceph.encrypted": "0",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:                 "ceph.osd_id": "1",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:                 "ceph.type": "block",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:                 "ceph.vdo": "0"
Jan 21 23:54:15 compute-0 friendly_kare[258193]:             },
Jan 21 23:54:15 compute-0 friendly_kare[258193]:             "type": "block",
Jan 21 23:54:15 compute-0 friendly_kare[258193]:             "vg_name": "ceph_vg0"
Jan 21 23:54:15 compute-0 friendly_kare[258193]:         }
Jan 21 23:54:15 compute-0 friendly_kare[258193]:     ]
Jan 21 23:54:15 compute-0 friendly_kare[258193]: }
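
[editor's note] This JSON is the result of the `ceph-volume ... lvm list --format json` run launched at 23:54:13: osd.1 (osd_fsid 4f45f4f4-edfc-474c-93fc-45d596171ed8) already lives on /dev/ceph_vg0/ceph_lv0, backed by /dev/loop3. That also explains the earlier `lvm batch` output "All data devices are unavailable": the only candidate LV is already tagged for an existing OSD, so the batch had nothing to create. A sketch for summarizing this structure, assuming the JSON above has been captured to a file:

    import json

    # lvm_list.json is a hypothetical capture of the output printed above.
    with open("lvm_list.json", encoding="utf-8") as fh:
        lvm = json.load(fh)

    for osd_id, volumes in lvm.items():
        for vol in volumes:
            tags = vol["tags"]
            print(
                f"osd.{osd_id}: {vol['lv_path']} "
                f"(fsid={tags['ceph.osd_fsid']}, devices={','.join(vol['devices'])})"
            )
    # -> osd.1: /dev/ceph_vg0/ceph_lv0 (fsid=4f45f4f4-..., devices=/dev/loop3)
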
Jan 21 23:54:15 compute-0 systemd[1]: libpod-1116596ed1b56f36f50aa1c52b7ee168216d7ae8cc46559d17d8693fe9be1436.scope: Deactivated successfully.
Jan 21 23:54:15 compute-0 podman[258176]: 2026-01-21 23:54:15.602990483 +0000 UTC m=+0.947676317 container died 1116596ed1b56f36f50aa1c52b7ee168216d7ae8cc46559d17d8693fe9be1436 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kare, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:54:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a2a54a31a8e13ce09f3fa6c7f391f4581ff270b73a2f30fbb2b65d8aa6f19db-merged.mount: Deactivated successfully.
Jan 21 23:54:15 compute-0 podman[258176]: 2026-01-21 23:54:15.661147548 +0000 UTC m=+1.005833382 container remove 1116596ed1b56f36f50aa1c52b7ee168216d7ae8cc46559d17d8693fe9be1436 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kare, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:54:15 compute-0 systemd[1]: libpod-conmon-1116596ed1b56f36f50aa1c52b7ee168216d7ae8cc46559d17d8693fe9be1436.scope: Deactivated successfully.
Jan 21 23:54:15 compute-0 sudo[258070]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:15 compute-0 sudo[258214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:54:15 compute-0 sudo[258214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:15 compute-0 sudo[258214]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:15 compute-0 sudo[258239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:54:15 compute-0 sudo[258239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:15 compute-0 sudo[258239]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:15 compute-0 sudo[258264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:54:15 compute-0 sudo[258264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:15 compute-0 sudo[258264]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:15 compute-0 sudo[258289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:54:15 compute-0 sudo[258289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:15 compute-0 nova_compute[247516]: 2026-01-21 23:54:15.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:54:15 compute-0 nova_compute[247516]: 2026-01-21 23:54:15.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:54:16 compute-0 nova_compute[247516]: 2026-01-21 23:54:16.020 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:54:16 compute-0 nova_compute[247516]: 2026-01-21 23:54:16.021 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:54:16 compute-0 nova_compute[247516]: 2026-01-21 23:54:16.021 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:54:16 compute-0 nova_compute[247516]: 2026-01-21 23:54:16.021 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 23:54:16 compute-0 nova_compute[247516]: 2026-01-21 23:54:16.022 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
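
[editor's note] The resource-tracker audit shells out to `ceph df --format=json --id openstack` to size the Ceph-backed storage before reporting capacity. A minimal sketch of the same probe; the stats keys are those `ceph df` emits in JSON mode, but treat the exact output shape as an assumption for your Ceph release:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]

    gib = 1024 ** 3
    print(f"total {stats['total_bytes'] / gib:.1f} GiB, "
          f"avail {stats['total_avail_bytes'] / gib:.1f} GiB")
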
Jan 21 23:54:16 compute-0 podman[258374]: 2026-01-21 23:54:16.328095167 +0000 UTC m=+0.051740518 container create 0e6caf643bab03a07f5fee04fd3497b95ece489bbe9759c0e716f06a402cba2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:54:16 compute-0 systemd[1]: Started libpod-conmon-0e6caf643bab03a07f5fee04fd3497b95ece489bbe9759c0e716f06a402cba2e.scope.
Jan 21 23:54:16 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:54:16 compute-0 podman[258374]: 2026-01-21 23:54:16.310479373 +0000 UTC m=+0.034124734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:54:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:16.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:16 compute-0 podman[258374]: 2026-01-21 23:54:16.414474504 +0000 UTC m=+0.138119935 container init 0e6caf643bab03a07f5fee04fd3497b95ece489bbe9759c0e716f06a402cba2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nightingale, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:54:16 compute-0 podman[258374]: 2026-01-21 23:54:16.425312618 +0000 UTC m=+0.148957969 container start 0e6caf643bab03a07f5fee04fd3497b95ece489bbe9759c0e716f06a402cba2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:54:16 compute-0 podman[258374]: 2026-01-21 23:54:16.430736995 +0000 UTC m=+0.154382456 container attach 0e6caf643bab03a07f5fee04fd3497b95ece489bbe9759c0e716f06a402cba2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 23:54:16 compute-0 systemd[1]: libpod-0e6caf643bab03a07f5fee04fd3497b95ece489bbe9759c0e716f06a402cba2e.scope: Deactivated successfully.
Jan 21 23:54:16 compute-0 objective_nightingale[258391]: 167 167
Jan 21 23:54:16 compute-0 podman[258374]: 2026-01-21 23:54:16.433069477 +0000 UTC m=+0.156714828 container died 0e6caf643bab03a07f5fee04fd3497b95ece489bbe9759c0e716f06a402cba2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 23:54:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:54:16 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2089986392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:54:16 compute-0 nova_compute[247516]: 2026-01-21 23:54:16.454 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:54:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-8beaf713aa7744f12c1410e582769232fe7713fa9248400110ad5afe44c02860-merged.mount: Deactivated successfully.
Jan 21 23:54:16 compute-0 podman[258374]: 2026-01-21 23:54:16.483068211 +0000 UTC m=+0.206713562 container remove 0e6caf643bab03a07f5fee04fd3497b95ece489bbe9759c0e716f06a402cba2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nightingale, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:54:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:16 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2089986392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:54:16 compute-0 systemd[1]: libpod-conmon-0e6caf643bab03a07f5fee04fd3497b95ece489bbe9759c0e716f06a402cba2e.scope: Deactivated successfully.
Jan 21 23:54:16 compute-0 nova_compute[247516]: 2026-01-21 23:54:16.636 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 23:54:16 compute-0 nova_compute[247516]: 2026-01-21 23:54:16.638 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5128MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 23:54:16 compute-0 nova_compute[247516]: 2026-01-21 23:54:16.639 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:54:16 compute-0 nova_compute[247516]: 2026-01-21 23:54:16.639 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:54:16 compute-0 podman[258416]: 2026-01-21 23:54:16.682757615 +0000 UTC m=+0.049779267 container create 74b0c68af2233a2f93d084e03cfdfde71b4dcc1f6413ff905bf5f1e47cd6201f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_benz, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 23:54:16 compute-0 systemd[1]: Started libpod-conmon-74b0c68af2233a2f93d084e03cfdfde71b4dcc1f6413ff905bf5f1e47cd6201f.scope.
Jan 21 23:54:16 compute-0 nova_compute[247516]: 2026-01-21 23:54:16.717 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 23:54:16 compute-0 nova_compute[247516]: 2026-01-21 23:54:16.718 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 23:54:16 compute-0 nova_compute[247516]: 2026-01-21 23:54:16.739 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:54:16 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1253078b6a745270bd5f44b99c2aa05c70d7ad0672d92bb6ed18902f258def7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1253078b6a745270bd5f44b99c2aa05c70d7ad0672d92bb6ed18902f258def7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1253078b6a745270bd5f44b99c2aa05c70d7ad0672d92bb6ed18902f258def7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1253078b6a745270bd5f44b99c2aa05c70d7ad0672d92bb6ed18902f258def7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:54:16 compute-0 podman[258416]: 2026-01-21 23:54:16.662533891 +0000 UTC m=+0.029555583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:54:16 compute-0 podman[258416]: 2026-01-21 23:54:16.766998176 +0000 UTC m=+0.134019918 container init 74b0c68af2233a2f93d084e03cfdfde71b4dcc1f6413ff905bf5f1e47cd6201f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_benz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 21 23:54:16 compute-0 podman[258416]: 2026-01-21 23:54:16.775977883 +0000 UTC m=+0.142999575 container start 74b0c68af2233a2f93d084e03cfdfde71b4dcc1f6413ff905bf5f1e47cd6201f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 23:54:16 compute-0 podman[258416]: 2026-01-21 23:54:16.780044848 +0000 UTC m=+0.147066560 container attach 74b0c68af2233a2f93d084e03cfdfde71b4dcc1f6413ff905bf5f1e47cd6201f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_benz, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:54:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:17.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:54:17 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/644458515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:54:17 compute-0 nova_compute[247516]: 2026-01-21 23:54:17.259 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:54:17 compute-0 nova_compute[247516]: 2026-01-21 23:54:17.267 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 23:54:17 compute-0 nova_compute[247516]: 2026-01-21 23:54:17.294 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 23:54:17 compute-0 nova_compute[247516]: 2026-01-21 23:54:17.296 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 23:54:17 compute-0 nova_compute[247516]: 2026-01-21 23:54:17.296 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:54:17 compute-0 ceph-mon[74318]: pgmap v1076: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:17 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/644458515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:54:17 compute-0 distracted_benz[258434]: {
Jan 21 23:54:17 compute-0 distracted_benz[258434]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:54:17 compute-0 distracted_benz[258434]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:54:17 compute-0 distracted_benz[258434]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:54:17 compute-0 distracted_benz[258434]:         "osd_id": 1,
Jan 21 23:54:17 compute-0 distracted_benz[258434]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:54:17 compute-0 distracted_benz[258434]:         "type": "bluestore"
Jan 21 23:54:17 compute-0 distracted_benz[258434]:     }
Jan 21 23:54:17 compute-0 distracted_benz[258434]: }
Jan 21 23:54:17 compute-0 systemd[1]: libpod-74b0c68af2233a2f93d084e03cfdfde71b4dcc1f6413ff905bf5f1e47cd6201f.scope: Deactivated successfully.
Jan 21 23:54:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:54:17 compute-0 podman[258478]: 2026-01-21 23:54:17.813033097 +0000 UTC m=+0.042915116 container died 74b0c68af2233a2f93d084e03cfdfde71b4dcc1f6413ff905bf5f1e47cd6201f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:54:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1253078b6a745270bd5f44b99c2aa05c70d7ad0672d92bb6ed18902f258def7c-merged.mount: Deactivated successfully.
Jan 21 23:54:17 compute-0 podman[258478]: 2026-01-21 23:54:17.884396561 +0000 UTC m=+0.114278520 container remove 74b0c68af2233a2f93d084e03cfdfde71b4dcc1f6413ff905bf5f1e47cd6201f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_benz, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 23:54:17 compute-0 systemd[1]: libpod-conmon-74b0c68af2233a2f93d084e03cfdfde71b4dcc1f6413ff905bf5f1e47cd6201f.scope: Deactivated successfully.
Jan 21 23:54:17 compute-0 sudo[258289]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:54:17 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:54:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:54:17 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:54:17 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev e6d48554-9d92-4c31-a870-8b6b9979149c does not exist
Jan 21 23:54:17 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 90a59850-1e44-4147-a98d-e314babc2347 does not exist
Jan 21 23:54:17 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev f802d046-0877-4380-84e0-96caeb97c262 does not exist
Jan 21 23:54:18 compute-0 sudo[258492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:54:18 compute-0 sudo[258492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:18 compute-0 sudo[258492]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:18 compute-0 sudo[258517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:54:18 compute-0 sudo[258517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:18 compute-0 sudo[258517]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:18.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:18 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:54:18 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:54:18 compute-0 ceph-mon[74318]: pgmap v1077: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:19.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:20.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:21.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:21 compute-0 ceph-mon[74318]: pgmap v1078: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:22.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:54:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:54:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:23.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:54:23 compute-0 ceph-mon[74318]: pgmap v1079: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:24.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:24 compute-0 ceph-mon[74318]: pgmap v1080: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:25.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/807613293' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:54:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/807613293' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:54:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:26.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:26 compute-0 ceph-mon[74318]: pgmap v1081: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:27.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:54:28 compute-0 sudo[258547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:54:28 compute-0 sudo[258547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:28 compute-0 sudo[258547]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:28 compute-0 sudo[258572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:54:28 compute-0 sudo[258572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:28 compute-0 sudo[258572]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:28.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.013000406s ======
Jan 21 23:54:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:29.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.013000406s
Jan 21 23:54:29 compute-0 ceph-mon[74318]: pgmap v1082: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:54:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:30.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:54:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:30 compute-0 ceph-mon[74318]: pgmap v1083: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:31.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:32 compute-0 podman[258599]: 2026-01-21 23:54:32.079209325 +0000 UTC m=+0.186284822 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 21 23:54:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:32.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:54:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:33.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:33 compute-0 ceph-mon[74318]: pgmap v1084: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:54:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:34.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:54:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:35.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:35 compute-0 ceph-mon[74318]: pgmap v1085: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:36.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:36 compute-0 ceph-mon[74318]: pgmap v1086: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:37.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:54:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:38.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:38 compute-0 podman[258631]: 2026-01-21 23:54:38.993051437 +0000 UTC m=+0.100670228 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 21 23:54:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:39.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:54:39
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'images', 'volumes', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'backups']
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:54:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:54:39 compute-0 ceph-mon[74318]: pgmap v1087: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:40.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:40 compute-0 ceph-mon[74318]: pgmap v1088: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:41.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:42.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:54:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:43.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:43 compute-0 ceph-mon[74318]: pgmap v1089: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:44.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:45.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:45 compute-0 ceph-mon[74318]: pgmap v1090: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:46.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:46 compute-0 ceph-mon[74318]: pgmap v1091: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:47.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:54:48 compute-0 sudo[258655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:54:48 compute-0 sudo[258655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:48 compute-0 sudo[258655]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:48 compute-0 sudo[258680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:54:48 compute-0 sudo[258680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:54:48 compute-0 sudo[258680]: pam_unix(sudo:session): session closed for user root
Jan 21 23:54:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:48.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:54:48.752 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:54:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:54:48.753 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:54:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:54:48.754 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:54:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:49.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:49 compute-0 ceph-mon[74318]: pgmap v1092: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:54:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:50.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 85 B/s wr, 0 op/s
Jan 21 23:54:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:51.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:51 compute-0 ceph-mon[74318]: pgmap v1093: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 85 B/s wr, 0 op/s
Jan 21 23:54:51 compute-0 ceph-osd[84656]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 21 23:54:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:52.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 255 B/s wr, 2 op/s
Jan 21 23:54:52 compute-0 ceph-mon[74318]: pgmap v1094: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 255 B/s wr, 2 op/s
Jan 21 23:54:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:54:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:53.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
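
The pg_autoscaler lines above expose their own arithmetic: each pool's raw pg target is its capacity ratio times its bias times a constant 300, which is consistent with the default mon_target_pg_per_osd of 100 across this cluster's 3 OSDs (an inference from the numbers; the log never states the multiplier). A short sketch that reproduces the logged targets:

    # Rough reconstruction of the raw pg target logged by the autoscaler above.
    # Assumption: 3 OSDs and the default mon_target_pg_per_osd = 100, giving
    # the factor 300 implied by the logged values.
    TARGET_PGS = 100 * 3

    def raw_pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * TARGET_PGS

    print(raw_pg_target(0.0019031427391587568, 1.0))   # ~0.570942... ('images')
    print(raw_pg_target(1.4540294062907128e-06, 4.0))  # ~0.001744... ('cephfs.cephfs.meta')
    print(raw_pg_target(2.0538165363856318e-05, 1.0))  # ~0.006161... ('.mgr')

The "quantized to N (current N)" step then rounds the target to a power of two and, since the autoscaler only acts when the target differs from the current pg_num by a large factor (roughly threefold by default), every pool in this run keeps its current value.
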
Jan 21 23:54:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:54.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 255 B/s wr, 2 op/s
Jan 21 23:54:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:55.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:55 compute-0 ceph-mon[74318]: pgmap v1095: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 255 B/s wr, 2 op/s
Jan 21 23:54:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:54:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:56.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:54:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 21 23:54:56 compute-0 ceph-mon[74318]: pgmap v1096: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 21 23:54:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:57.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:54:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:54:58.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 21 23:54:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:54:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:54:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:54:59.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:54:59 compute-0 ceph-mon[74318]: pgmap v1097: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 21 23:55:00 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:55:00.293 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 23:55:00 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:55:00.293 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 23:55:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:00.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 21 23:55:00 compute-0 ceph-mon[74318]: pgmap v1098: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 21 23:55:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:01.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 21 23:55:01 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/949304577' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:55:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 21 23:55:01 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/949304577' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:55:01 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/949304577' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:55:01 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/949304577' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:55:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:02.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 5.7 KiB/s rd, 938 B/s wr, 9 op/s
Jan 21 23:55:02 compute-0 ceph-mon[74318]: pgmap v1099: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 5.7 KiB/s rd, 938 B/s wr, 9 op/s
Jan 21 23:55:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:55:02 compute-0 podman[258712]: 2026-01-21 23:55:02.968519254 +0000 UTC m=+0.084196820 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 23:55:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:55:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:03.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:55:03 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/4156747364' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:55:03 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/4156747364' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:55:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:04.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 5.1 KiB/s rd, 767 B/s wr, 7 op/s
Jan 21 23:55:04 compute-0 ceph-mon[74318]: pgmap v1100: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 5.1 KiB/s rd, 767 B/s wr, 7 op/s
Jan 21 23:55:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:05.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:05 compute-0 nova_compute[247516]: 2026-01-21 23:55:05.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:55:05 compute-0 nova_compute[247516]: 2026-01-21 23:55:05.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 21 23:55:06 compute-0 nova_compute[247516]: 2026-01-21 23:55:06.020 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:55:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:55:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:06.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:55:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 21 23:55:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:07.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:07 compute-0 ceph-mon[74318]: pgmap v1101: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 21 23:55:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:55:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:08.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:08 compute-0 sudo[258741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:08 compute-0 sudo[258741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:08 compute-0 sudo[258741]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 21 op/s
Jan 21 23:55:08 compute-0 sudo[258766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:08 compute-0 sudo[258766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:08 compute-0 sudo[258766]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:08 compute-0 ceph-mon[74318]: pgmap v1102: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 21 op/s
Jan 21 23:55:09 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 23:55:09 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 8388 writes, 30K keys, 8388 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8388 writes, 2056 syncs, 4.08 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1911 writes, 4450 keys, 1911 commit groups, 1.0 writes per commit group, ingest: 2.11 MB, 0.00 MB/s
                                           Interval WAL: 1911 writes, 846 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
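
The derived columns in the RocksDB dump above are simple ratios of the counters printed on the same line, which makes a stats block like this easy to sanity-check:

    # Sanity-checking the derived figures in the RocksDB "DB Stats" dump above:
    # "writes per sync" is writes / syncs for the same window.
    cum_writes, cum_syncs = 8388, 2056
    print(round(cum_writes / cum_syncs, 2))            # 4.08, matching "4.08 writes per sync"

    interval_writes, interval_syncs = 1911, 846
    print(round(interval_writes / interval_syncs, 2))  # 2.26, matching "2.26 writes per sync"

Throughput works the same way: 0.02 GB ingested over the 1800 s uptime is about 0.01 MB/s, as printed.
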
Jan 21 23:55:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:55:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:55:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:55:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:55:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:55:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:55:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:09.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:09 compute-0 podman[258792]: 2026-01-21 23:55:09.931535645 +0000 UTC m=+0.051647306 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 23:55:10 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:55:10.295 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 23:55:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:10.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 21 op/s
Jan 21 23:55:11 compute-0 nova_compute[247516]: 2026-01-21 23:55:11.030 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:55:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:11.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:11 compute-0 ceph-mon[74318]: pgmap v1103: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 21 op/s
Jan 21 23:55:11 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1927256295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:55:11 compute-0 nova_compute[247516]: 2026-01-21 23:55:11.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:55:11 compute-0 nova_compute[247516]: 2026-01-21 23:55:11.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 23:55:11 compute-0 nova_compute[247516]: 2026-01-21 23:55:11.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 23:55:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:55:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:12.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:55:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 21 op/s
Jan 21 23:55:12 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2376910159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:55:12 compute-0 ceph-mon[74318]: pgmap v1104: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 511 B/s wr, 21 op/s
Jan 21 23:55:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:55:12 compute-0 nova_compute[247516]: 2026-01-21 23:55:12.804 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 23:55:12 compute-0 nova_compute[247516]: 2026-01-21 23:55:12.804 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:55:12 compute-0 nova_compute[247516]: 2026-01-21 23:55:12.804 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 23:55:12 compute-0 nova_compute[247516]: 2026-01-21 23:55:12.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:55:12 compute-0 nova_compute[247516]: 2026-01-21 23:55:12.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:55:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:13.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:13 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/618786659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:55:13 compute-0 nova_compute[247516]: 2026-01-21 23:55:13.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:55:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:55:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:14.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:55:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 511 B/s wr, 20 op/s
Jan 21 23:55:14 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/4049807250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:55:14 compute-0 ceph-mon[74318]: pgmap v1105: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 511 B/s wr, 20 op/s
Jan 21 23:55:14 compute-0 nova_compute[247516]: 2026-01-21 23:55:14.987 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:55:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:15.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:16.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 511 B/s wr, 20 op/s
Jan 21 23:55:16 compute-0 nova_compute[247516]: 2026-01-21 23:55:16.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:55:16 compute-0 nova_compute[247516]: 2026-01-21 23:55:16.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:55:16 compute-0 nova_compute[247516]: 2026-01-21 23:55:16.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:55:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:55:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:17.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:55:17 compute-0 ceph-mon[74318]: pgmap v1106: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 511 B/s wr, 20 op/s
Jan 21 23:55:17 compute-0 nova_compute[247516]: 2026-01-21 23:55:17.746 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:55:17 compute-0 nova_compute[247516]: 2026-01-21 23:55:17.746 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:55:17 compute-0 nova_compute[247516]: 2026-01-21 23:55:17.747 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:55:17 compute-0 nova_compute[247516]: 2026-01-21 23:55:17.747 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 23:55:17 compute-0 nova_compute[247516]: 2026-01-21 23:55:17.747 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:55:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:55:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:55:18 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/424461906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:55:18 compute-0 nova_compute[247516]: 2026-01-21 23:55:18.270 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
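
The CMD lines above show nova's resource tracker shelling out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` during its periodic update_available_resource run. A minimal sketch of the same probe; the JSON keys used ("stats", "total_avail_bytes") are the usual `ceph df` output fields and are an assumption here, since the log never prints the command's output:

    import json
    import subprocess

    # Mirrors the exact command nova runs in the lines above.
    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])

    # Key names assumed from the standard `ceph df` JSON layout.
    stats = json.loads(out)["stats"]
    free_gib = stats["total_avail_bytes"] / (1 << 30)
    print(f"cluster free: {free_gib:.2f} GiB")
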
Jan 21 23:55:18 compute-0 nova_compute[247516]: 2026-01-21 23:55:18.440 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 23:55:18 compute-0 nova_compute[247516]: 2026-01-21 23:55:18.442 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5208MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 23:55:18 compute-0 nova_compute[247516]: 2026-01-21 23:55:18.442 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:55:18 compute-0 nova_compute[247516]: 2026-01-21 23:55:18.442 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:55:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:55:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:18.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:55:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Jan 21 23:55:18 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/424461906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:55:18 compute-0 ceph-mon[74318]: pgmap v1107: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Jan 21 23:55:18 compute-0 sudo[258837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:18 compute-0 sudo[258837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:18 compute-0 sudo[258837]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:18 compute-0 sudo[258862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:55:18 compute-0 sudo[258862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:18 compute-0 sudo[258862]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:18 compute-0 sudo[258887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:18 compute-0 sudo[258887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:18 compute-0 sudo[258887]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:18 compute-0 sudo[258912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 21 23:55:18 compute-0 sudo[258912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:55:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:55:19 compute-0 sudo[258912]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:55:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:55:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:19 compute-0 sudo[258957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:19 compute-0 sudo[258957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:19 compute-0 sudo[258957]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:19.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:19 compute-0 sudo[258982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:55:19 compute-0 sudo[258982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:19 compute-0 sudo[258982]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:19 compute-0 sudo[259008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:19 compute-0 sudo[259008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:19 compute-0 sudo[259008]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:19 compute-0 sudo[259033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:55:19 compute-0 sudo[259033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:19 compute-0 ceph-mgr[74614]: [devicehealth INFO root] Check health
Jan 21 23:55:20 compute-0 sudo[259033]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:55:20 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:20 compute-0 nova_compute[247516]: 2026-01-21 23:55:20.210 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 23:55:20 compute-0 nova_compute[247516]: 2026-01-21 23:55:20.211 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 23:55:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:55:20 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:20 compute-0 nova_compute[247516]: 2026-01-21 23:55:20.247 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:55:20 compute-0 sudo[259091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:20 compute-0 sudo[259091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:20 compute-0 sudo[259091]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:20 compute-0 sudo[259117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:55:20 compute-0 sudo[259117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:20 compute-0 sudo[259117]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:20.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:20 compute-0 sudo[259161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:20 compute-0 sudo[259161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:20 compute-0 sudo[259161]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:55:20 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:55:20 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:20 compute-0 sudo[259186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- inventory --format=json-pretty --filter-for-batch
Jan 21 23:55:20 compute-0 sudo[259186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:55:20 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4071234107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:55:20 compute-0 nova_compute[247516]: 2026-01-21 23:55:20.770 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:55:20 compute-0 nova_compute[247516]: 2026-01-21 23:55:20.778 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 23:55:20 compute-0 nova_compute[247516]: 2026-01-21 23:55:20.799 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 23:55:20 compute-0 nova_compute[247516]: 2026-01-21 23:55:20.801 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 23:55:20 compute-0 nova_compute[247516]: 2026-01-21 23:55:20.801 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.359s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:55:20 compute-0 nova_compute[247516]: 2026-01-21 23:55:20.802 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:55:20 compute-0 nova_compute[247516]: 2026-01-21 23:55:20.802 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 21 23:55:20 compute-0 nova_compute[247516]: 2026-01-21 23:55:20.830 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 21 23:55:21 compute-0 podman[259252]: 2026-01-21 23:55:21.021116211 +0000 UTC m=+0.058263240 container create 5c7cff747bb8caf2307d127a1fee0c5bb08c26f49cb01e2385ca62d724b4b572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williams, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 21 23:55:21 compute-0 systemd[1]: Started libpod-conmon-5c7cff747bb8caf2307d127a1fee0c5bb08c26f49cb01e2385ca62d724b4b572.scope.
Jan 21 23:55:21 compute-0 podman[259252]: 2026-01-21 23:55:20.993101276 +0000 UTC m=+0.030248345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:55:21 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:55:21 compute-0 podman[259252]: 2026-01-21 23:55:21.139369741 +0000 UTC m=+0.176516871 container init 5c7cff747bb8caf2307d127a1fee0c5bb08c26f49cb01e2385ca62d724b4b572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williams, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:55:21 compute-0 podman[259252]: 2026-01-21 23:55:21.152441845 +0000 UTC m=+0.189588894 container start 5c7cff747bb8caf2307d127a1fee0c5bb08c26f49cb01e2385ca62d724b4b572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williams, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 21 23:55:21 compute-0 podman[259252]: 2026-01-21 23:55:21.156589403 +0000 UTC m=+0.193736452 container attach 5c7cff747bb8caf2307d127a1fee0c5bb08c26f49cb01e2385ca62d724b4b572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williams, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:55:21 compute-0 silly_williams[259268]: 167 167
Jan 21 23:55:21 compute-0 systemd[1]: libpod-5c7cff747bb8caf2307d127a1fee0c5bb08c26f49cb01e2385ca62d724b4b572.scope: Deactivated successfully.
Jan 21 23:55:21 compute-0 podman[259252]: 2026-01-21 23:55:21.163256829 +0000 UTC m=+0.200403888 container died 5c7cff747bb8caf2307d127a1fee0c5bb08c26f49cb01e2385ca62d724b4b572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williams, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 21 23:55:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed32a50530b89c75f09102f23fbfa560546b67a97ab1c513b37aa472409beb80-merged.mount: Deactivated successfully.
Jan 21 23:55:21 compute-0 podman[259252]: 2026-01-21 23:55:21.216708929 +0000 UTC m=+0.253855938 container remove 5c7cff747bb8caf2307d127a1fee0c5bb08c26f49cb01e2385ca62d724b4b572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williams, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 21 23:55:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:21 compute-0 ceph-mon[74318]: pgmap v1108: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/4071234107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:55:21 compute-0 systemd[1]: libpod-conmon-5c7cff747bb8caf2307d127a1fee0c5bb08c26f49cb01e2385ca62d724b4b572.scope: Deactivated successfully.
Jan 21 23:55:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:21.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:21 compute-0 podman[259292]: 2026-01-21 23:55:21.418731115 +0000 UTC m=+0.052575394 container create 70cb719b821303ea44f9e0bf1aad9729183416767a11faf75c1606e29be59945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bell, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:55:21 compute-0 systemd[1]: Started libpod-conmon-70cb719b821303ea44f9e0bf1aad9729183416767a11faf75c1606e29be59945.scope.
Jan 21 23:55:21 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:55:21 compute-0 podman[259292]: 2026-01-21 23:55:21.40007598 +0000 UTC m=+0.033920279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:55:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77eb9a658c2d140f9c6edce548a86893db7abe21b95c5ab62fe4e6b0b84e095f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:55:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77eb9a658c2d140f9c6edce548a86893db7abe21b95c5ab62fe4e6b0b84e095f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:55:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77eb9a658c2d140f9c6edce548a86893db7abe21b95c5ab62fe4e6b0b84e095f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:55:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77eb9a658c2d140f9c6edce548a86893db7abe21b95c5ab62fe4e6b0b84e095f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:55:21 compute-0 podman[259292]: 2026-01-21 23:55:21.513151881 +0000 UTC m=+0.146996180 container init 70cb719b821303ea44f9e0bf1aad9729183416767a11faf75c1606e29be59945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:55:21 compute-0 podman[259292]: 2026-01-21 23:55:21.52964993 +0000 UTC m=+0.163494209 container start 70cb719b821303ea44f9e0bf1aad9729183416767a11faf75c1606e29be59945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:55:21 compute-0 podman[259292]: 2026-01-21 23:55:21.533498468 +0000 UTC m=+0.167342747 container attach 70cb719b821303ea44f9e0bf1aad9729183416767a11faf75c1606e29be59945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:55:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:22.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:55:22 compute-0 practical_bell[259310]: [
Jan 21 23:55:22 compute-0 practical_bell[259310]:     {
Jan 21 23:55:22 compute-0 practical_bell[259310]:         "available": false,
Jan 21 23:55:22 compute-0 practical_bell[259310]:         "ceph_device": false,
Jan 21 23:55:22 compute-0 practical_bell[259310]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 21 23:55:22 compute-0 practical_bell[259310]:         "lsm_data": {},
Jan 21 23:55:22 compute-0 practical_bell[259310]:         "lvs": [],
Jan 21 23:55:22 compute-0 practical_bell[259310]:         "path": "/dev/sr0",
Jan 21 23:55:22 compute-0 practical_bell[259310]:         "rejected_reasons": [
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "Has a FileSystem",
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "Insufficient space (<5GB)"
Jan 21 23:55:22 compute-0 practical_bell[259310]:         ],
Jan 21 23:55:22 compute-0 practical_bell[259310]:         "sys_api": {
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "actuators": null,
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "device_nodes": "sr0",
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "devname": "sr0",
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "human_readable_size": "482.00 KB",
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "id_bus": "ata",
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "model": "QEMU DVD-ROM",
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "nr_requests": "2",
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "parent": "/dev/sr0",
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "partitions": {},
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "path": "/dev/sr0",
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "removable": "1",
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "rev": "2.5+",
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "ro": "0",
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "rotational": "1",
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "sas_address": "",
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "sas_device_handle": "",
Jan 21 23:55:22 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "scheduler_mode": "mq-deadline",
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "sectors": 0,
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "sectorsize": "2048",
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "size": 493568.0,
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "support_discard": "2048",
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "type": "disk",
Jan 21 23:55:22 compute-0 practical_bell[259310]:             "vendor": "QEMU"
Jan 21 23:55:22 compute-0 practical_bell[259310]:         }
Jan 21 23:55:22 compute-0 practical_bell[259310]:     }
Jan 21 23:55:22 compute-0 practical_bell[259310]: ]
Jan 21 23:55:22 compute-0 systemd[1]: libpod-70cb719b821303ea44f9e0bf1aad9729183416767a11faf75c1606e29be59945.scope: Deactivated successfully.
Jan 21 23:55:22 compute-0 systemd[1]: libpod-70cb719b821303ea44f9e0bf1aad9729183416767a11faf75c1606e29be59945.scope: Consumed 1.335s CPU time.
Jan 21 23:55:22 compute-0 podman[259292]: 2026-01-21 23:55:22.831774147 +0000 UTC m=+1.465618426 container died 70cb719b821303ea44f9e0bf1aad9729183416767a11faf75c1606e29be59945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:55:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-77eb9a658c2d140f9c6edce548a86893db7abe21b95c5ab62fe4e6b0b84e095f-merged.mount: Deactivated successfully.
Jan 21 23:55:22 compute-0 podman[259292]: 2026-01-21 23:55:22.888485387 +0000 UTC m=+1.522329676 container remove 70cb719b821303ea44f9e0bf1aad9729183416767a11faf75c1606e29be59945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 21 23:55:22 compute-0 systemd[1]: libpod-conmon-70cb719b821303ea44f9e0bf1aad9729183416767a11faf75c1606e29be59945.scope: Deactivated successfully.
Jan 21 23:55:22 compute-0 sudo[259186]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:55:22 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:55:22 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 21 23:55:23 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 21 23:55:23 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:55:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:55:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:55:23 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:55:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:55:23 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:23 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 12216171-6869-49ae-9f18-9892aac029fe does not exist
Jan 21 23:55:23 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 215927dc-3479-415f-8994-0372983e86e8 does not exist
Jan 21 23:55:23 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev c974093c-f114-4f89-a854-b47c66937d4c does not exist
Jan 21 23:55:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:55:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:55:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:55:23 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:55:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:55:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:55:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:23.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:23 compute-0 sudo[260488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:23 compute-0 sudo[260488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:23 compute-0 sudo[260488]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:23 compute-0 sudo[260514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:55:23 compute-0 sudo[260514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:23 compute-0 sudo[260514]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:23 compute-0 sudo[260539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:23 compute-0 sudo[260539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:23 compute-0 sudo[260539]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:23 compute-0 sudo[260564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:55:23 compute-0 sudo[260564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:23 compute-0 ceph-mon[74318]: pgmap v1109: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:55:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:55:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:55:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:55:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:55:23 compute-0 podman[260629]: 2026-01-21 23:55:23.95562809 +0000 UTC m=+0.061874561 container create e6a7a67160eb61c0a4c70ed6026a99ac244eda937a8659b141444dc8aed9c766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williamson, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 23:55:24 compute-0 systemd[1]: Started libpod-conmon-e6a7a67160eb61c0a4c70ed6026a99ac244eda937a8659b141444dc8aed9c766.scope.
Jan 21 23:55:24 compute-0 podman[260629]: 2026-01-21 23:55:23.925270943 +0000 UTC m=+0.031517464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:55:24 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:55:24 compute-0 podman[260629]: 2026-01-21 23:55:24.055715049 +0000 UTC m=+0.161961560 container init e6a7a67160eb61c0a4c70ed6026a99ac244eda937a8659b141444dc8aed9c766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williamson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 21 23:55:24 compute-0 podman[260629]: 2026-01-21 23:55:24.067032289 +0000 UTC m=+0.173278730 container start e6a7a67160eb61c0a4c70ed6026a99ac244eda937a8659b141444dc8aed9c766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williamson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:55:24 compute-0 podman[260629]: 2026-01-21 23:55:24.070750574 +0000 UTC m=+0.176997045 container attach e6a7a67160eb61c0a4c70ed6026a99ac244eda937a8659b141444dc8aed9c766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williamson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:55:24 compute-0 agitated_williamson[260645]: 167 167
Jan 21 23:55:24 compute-0 systemd[1]: libpod-e6a7a67160eb61c0a4c70ed6026a99ac244eda937a8659b141444dc8aed9c766.scope: Deactivated successfully.
Jan 21 23:55:24 compute-0 podman[260629]: 2026-01-21 23:55:24.073817499 +0000 UTC m=+0.180063980 container died e6a7a67160eb61c0a4c70ed6026a99ac244eda937a8659b141444dc8aed9c766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 21 23:55:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9b51da60626b56d8e695f45e5b433f664e970ed98a672eac8fae81da864ec3c-merged.mount: Deactivated successfully.
Jan 21 23:55:24 compute-0 podman[260629]: 2026-01-21 23:55:24.128898469 +0000 UTC m=+0.235144940 container remove e6a7a67160eb61c0a4c70ed6026a99ac244eda937a8659b141444dc8aed9c766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:55:24 compute-0 systemd[1]: libpod-conmon-e6a7a67160eb61c0a4c70ed6026a99ac244eda937a8659b141444dc8aed9c766.scope: Deactivated successfully.
Jan 21 23:55:24 compute-0 podman[260668]: 2026-01-21 23:55:24.34046238 +0000 UTC m=+0.054841484 container create cdb84461d10e7404834b94b684e4408e6ad63d3069f8eb23a7c77f55839d0d43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cray, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 23:55:24 compute-0 systemd[1]: Started libpod-conmon-cdb84461d10e7404834b94b684e4408e6ad63d3069f8eb23a7c77f55839d0d43.scope.
Jan 21 23:55:24 compute-0 podman[260668]: 2026-01-21 23:55:24.316344525 +0000 UTC m=+0.030723679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:55:24 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d409a52a7d8429f739973e4c097c1a90774cb2f72b46129f647fa534375ded/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d409a52a7d8429f739973e4c097c1a90774cb2f72b46129f647fa534375ded/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d409a52a7d8429f739973e4c097c1a90774cb2f72b46129f647fa534375ded/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d409a52a7d8429f739973e4c097c1a90774cb2f72b46129f647fa534375ded/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d409a52a7d8429f739973e4c097c1a90774cb2f72b46129f647fa534375ded/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:55:24 compute-0 podman[260668]: 2026-01-21 23:55:24.45124824 +0000 UTC m=+0.165627354 container init cdb84461d10e7404834b94b684e4408e6ad63d3069f8eb23a7c77f55839d0d43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 23:55:24 compute-0 podman[260668]: 2026-01-21 23:55:24.475397715 +0000 UTC m=+0.189776819 container start cdb84461d10e7404834b94b684e4408e6ad63d3069f8eb23a7c77f55839d0d43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:55:24 compute-0 podman[260668]: 2026-01-21 23:55:24.480250895 +0000 UTC m=+0.194630009 container attach cdb84461d10e7404834b94b684e4408e6ad63d3069f8eb23a7c77f55839d0d43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:55:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:55:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:24.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:55:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:24 compute-0 ceph-mon[74318]: pgmap v1110: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:25.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:25 compute-0 sweet_cray[260684]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:55:25 compute-0 sweet_cray[260684]: --> relative data size: 1.0
Jan 21 23:55:25 compute-0 sweet_cray[260684]: --> All data devices are unavailable
Jan 21 23:55:25 compute-0 systemd[1]: libpod-cdb84461d10e7404834b94b684e4408e6ad63d3069f8eb23a7c77f55839d0d43.scope: Deactivated successfully.
Jan 21 23:55:25 compute-0 podman[260668]: 2026-01-21 23:55:25.390313529 +0000 UTC m=+1.104692653 container died cdb84461d10e7404834b94b684e4408e6ad63d3069f8eb23a7c77f55839d0d43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:55:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5d409a52a7d8429f739973e4c097c1a90774cb2f72b46129f647fa534375ded-merged.mount: Deactivated successfully.
Jan 21 23:55:25 compute-0 podman[260668]: 2026-01-21 23:55:25.445381569 +0000 UTC m=+1.159760673 container remove cdb84461d10e7404834b94b684e4408e6ad63d3069f8eb23a7c77f55839d0d43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:55:25 compute-0 systemd[1]: libpod-conmon-cdb84461d10e7404834b94b684e4408e6ad63d3069f8eb23a7c77f55839d0d43.scope: Deactivated successfully.
Jan 21 23:55:25 compute-0 sudo[260564]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:25 compute-0 sudo[260712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:25 compute-0 sudo[260712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:25 compute-0 sudo[260712]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:25 compute-0 sudo[260737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:55:25 compute-0 sudo[260737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:25 compute-0 sudo[260737]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:25 compute-0 sudo[260762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:25 compute-0 sudo[260762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:25 compute-0 sudo[260762]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2531704774' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:55:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2531704774' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:55:25 compute-0 sudo[260787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:55:25 compute-0 sudo[260787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:26 compute-0 podman[260850]: 2026-01-21 23:55:26.168186712 +0000 UTC m=+0.048389845 container create f1f730a70bc5c3a238e98c75574085cfc705c565a911d4c4d2f8409fd652995a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 23:55:26 compute-0 systemd[1]: Started libpod-conmon-f1f730a70bc5c3a238e98c75574085cfc705c565a911d4c4d2f8409fd652995a.scope.
Jan 21 23:55:26 compute-0 podman[260850]: 2026-01-21 23:55:26.145808491 +0000 UTC m=+0.026011704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:55:26 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:55:26 compute-0 podman[260850]: 2026-01-21 23:55:26.265493796 +0000 UTC m=+0.145696929 container init f1f730a70bc5c3a238e98c75574085cfc705c565a911d4c4d2f8409fd652995a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:55:26 compute-0 podman[260850]: 2026-01-21 23:55:26.278812257 +0000 UTC m=+0.159015380 container start f1f730a70bc5c3a238e98c75574085cfc705c565a911d4c4d2f8409fd652995a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 23:55:26 compute-0 podman[260850]: 2026-01-21 23:55:26.282700847 +0000 UTC m=+0.162903980 container attach f1f730a70bc5c3a238e98c75574085cfc705c565a911d4c4d2f8409fd652995a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:55:26 compute-0 gallant_jang[260867]: 167 167
Jan 21 23:55:26 compute-0 systemd[1]: libpod-f1f730a70bc5c3a238e98c75574085cfc705c565a911d4c4d2f8409fd652995a.scope: Deactivated successfully.
Jan 21 23:55:26 compute-0 podman[260850]: 2026-01-21 23:55:26.284769451 +0000 UTC m=+0.164972604 container died f1f730a70bc5c3a238e98c75574085cfc705c565a911d4c4d2f8409fd652995a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:55:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3221693e94571f3b391dad4b518a906b13265a5ec86e235fcb8d4929cd9ea9e-merged.mount: Deactivated successfully.
Jan 21 23:55:26 compute-0 podman[260850]: 2026-01-21 23:55:26.320661199 +0000 UTC m=+0.200864322 container remove f1f730a70bc5c3a238e98c75574085cfc705c565a911d4c4d2f8409fd652995a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:55:26 compute-0 systemd[1]: libpod-conmon-f1f730a70bc5c3a238e98c75574085cfc705c565a911d4c4d2f8409fd652995a.scope: Deactivated successfully.
Jan 21 23:55:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:26.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:26 compute-0 podman[260892]: 2026-01-21 23:55:26.556434597 +0000 UTC m=+0.065459001 container create ed4d68b4a42e5933c904b593c9139292c9009c39a8244047caf6dcdd0a909437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jennings, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Jan 21 23:55:26 compute-0 systemd[1]: Started libpod-conmon-ed4d68b4a42e5933c904b593c9139292c9009c39a8244047caf6dcdd0a909437.scope.
Jan 21 23:55:26 compute-0 podman[260892]: 2026-01-21 23:55:26.53418477 +0000 UTC m=+0.043209254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:55:26 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:55:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/620bede4bcb0a6a2e8ff09b8befa61d556b8c1a5d4996554c438e045a9786ff3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:55:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/620bede4bcb0a6a2e8ff09b8befa61d556b8c1a5d4996554c438e045a9786ff3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:55:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/620bede4bcb0a6a2e8ff09b8befa61d556b8c1a5d4996554c438e045a9786ff3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:55:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/620bede4bcb0a6a2e8ff09b8befa61d556b8c1a5d4996554c438e045a9786ff3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:55:26 compute-0 podman[260892]: 2026-01-21 23:55:26.669489658 +0000 UTC m=+0.178514142 container init ed4d68b4a42e5933c904b593c9139292c9009c39a8244047caf6dcdd0a909437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 21 23:55:26 compute-0 podman[260892]: 2026-01-21 23:55:26.680108535 +0000 UTC m=+0.189132939 container start ed4d68b4a42e5933c904b593c9139292c9009c39a8244047caf6dcdd0a909437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:55:26 compute-0 podman[260892]: 2026-01-21 23:55:26.683949954 +0000 UTC m=+0.192974388 container attach ed4d68b4a42e5933c904b593c9139292c9009c39a8244047caf6dcdd0a909437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:55:26 compute-0 ceph-mon[74318]: pgmap v1111: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:27.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:27 compute-0 elastic_jennings[260909]: {
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:     "1": [
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:         {
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:             "devices": [
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:                 "/dev/loop3"
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:             ],
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:             "lv_name": "ceph_lv0",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:             "lv_size": "7511998464",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:             "name": "ceph_lv0",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:             "tags": {
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:                 "ceph.cluster_name": "ceph",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:                 "ceph.crush_device_class": "",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:                 "ceph.encrypted": "0",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:                 "ceph.osd_id": "1",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:                 "ceph.type": "block",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:                 "ceph.vdo": "0"
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:             },
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:             "type": "block",
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:             "vg_name": "ceph_vg0"
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:         }
Jan 21 23:55:27 compute-0 elastic_jennings[260909]:     ]
Jan 21 23:55:27 compute-0 elastic_jennings[260909]: }
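The JSON block above has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags repeated in parsed form under "tags". (The exact subcommand is not logged for this container; the inference is from the output shape and from the `raw list` call logged a few lines below.) A minimal sketch, assuming the block has been saved to a file, of reducing it to an OSD-id → device map:

import json

# Minimal sketch: reduce the `ceph-volume lvm list --format json` block
# logged above to {osd_id: (lv_path, osd_fsid)}.  The filename is
# hypothetical; the keys mirror the logged JSON exactly.
with open("lvm_list.json") as f:
    inventory = json.load(f)

osds = {}
for osd_id, lvs in inventory.items():
    for lv in lvs:
        tags = lv.get("tags", {})
        osds[osd_id] = (lv.get("lv_path"), tags.get("ceph.osd_fsid"))

print(osds)
# {'1': ('/dev/ceph_vg0/ceph_lv0', '4f45f4f4-edfc-474c-93fc-45d596171ed8')}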
Jan 21 23:55:27 compute-0 systemd[1]: libpod-ed4d68b4a42e5933c904b593c9139292c9009c39a8244047caf6dcdd0a909437.scope: Deactivated successfully.
Jan 21 23:55:27 compute-0 podman[260892]: 2026-01-21 23:55:27.514021148 +0000 UTC m=+1.023045572 container died ed4d68b4a42e5933c904b593c9139292c9009c39a8244047caf6dcdd0a909437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 21 23:55:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-620bede4bcb0a6a2e8ff09b8befa61d556b8c1a5d4996554c438e045a9786ff3-merged.mount: Deactivated successfully.
Jan 21 23:55:27 compute-0 podman[260892]: 2026-01-21 23:55:27.57467511 +0000 UTC m=+1.083699524 container remove ed4d68b4a42e5933c904b593c9139292c9009c39a8244047caf6dcdd0a909437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:55:27 compute-0 systemd[1]: libpod-conmon-ed4d68b4a42e5933c904b593c9139292c9009c39a8244047caf6dcdd0a909437.scope: Deactivated successfully.
Jan 21 23:55:27 compute-0 sudo[260787]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:27 compute-0 sudo[260931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:27 compute-0 sudo[260931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:27 compute-0 sudo[260931]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:55:27 compute-0 sudo[260956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:55:27 compute-0 sudo[260956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:27 compute-0 sudo[260956]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:27 compute-0 sudo[260981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:27 compute-0 sudo[260981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:27 compute-0 sudo[260981]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:28 compute-0 sudo[261006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:55:28 compute-0 sudo[261006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:28 compute-0 podman[261073]: 2026-01-21 23:55:28.478995237 +0000 UTC m=+0.071552320 container create e123f865acc2dfbdb898af390bf79cfcfd6090e376bba84057572ba1c857401e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_heisenberg, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 23:55:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:28.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:28 compute-0 systemd[1]: Started libpod-conmon-e123f865acc2dfbdb898af390bf79cfcfd6090e376bba84057572ba1c857401e.scope.
Jan 21 23:55:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:28 compute-0 podman[261073]: 2026-01-21 23:55:28.447859316 +0000 UTC m=+0.040416489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:55:28 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:55:28 compute-0 podman[261073]: 2026-01-21 23:55:28.638660245 +0000 UTC m=+0.231217328 container init e123f865acc2dfbdb898af390bf79cfcfd6090e376bba84057572ba1c857401e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 23:55:28 compute-0 podman[261073]: 2026-01-21 23:55:28.653642428 +0000 UTC m=+0.246199511 container start e123f865acc2dfbdb898af390bf79cfcfd6090e376bba84057572ba1c857401e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 21 23:55:28 compute-0 podman[261073]: 2026-01-21 23:55:28.657220569 +0000 UTC m=+0.249777652 container attach e123f865acc2dfbdb898af390bf79cfcfd6090e376bba84057572ba1c857401e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_heisenberg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:55:28 compute-0 cranky_heisenberg[261089]: 167 167
Jan 21 23:55:28 compute-0 systemd[1]: libpod-e123f865acc2dfbdb898af390bf79cfcfd6090e376bba84057572ba1c857401e.scope: Deactivated successfully.
Jan 21 23:55:28 compute-0 conmon[261089]: conmon e123f865acc2dfbdb898 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e123f865acc2dfbdb898af390bf79cfcfd6090e376bba84057572ba1c857401e.scope/container/memory.events
Jan 21 23:55:28 compute-0 podman[261073]: 2026-01-21 23:55:28.663162493 +0000 UTC m=+0.255719576 container died e123f865acc2dfbdb898af390bf79cfcfd6090e376bba84057572ba1c857401e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_heisenberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:55:28 compute-0 ceph-mon[74318]: pgmap v1112: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:28 compute-0 sudo[261092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:28 compute-0 sudo[261092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d91c115743a0d91bb79a81ba42f474485a61436255e5d84275d7bb30283d4033-merged.mount: Deactivated successfully.
Jan 21 23:55:28 compute-0 sudo[261092]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:28 compute-0 podman[261073]: 2026-01-21 23:55:28.7220171 +0000 UTC m=+0.314574223 container remove e123f865acc2dfbdb898af390bf79cfcfd6090e376bba84057572ba1c857401e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_heisenberg, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Jan 21 23:55:28 compute-0 systemd[1]: libpod-conmon-e123f865acc2dfbdb898af390bf79cfcfd6090e376bba84057572ba1c857401e.scope: Deactivated successfully.
Jan 21 23:55:28 compute-0 sudo[261127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:28 compute-0 sudo[261127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:28 compute-0 sudo[261127]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:28 compute-0 podman[261162]: 2026-01-21 23:55:28.97080965 +0000 UTC m=+0.069883118 container create dea1b745fd7d71264247cac1013d81a4a57eb5409e32230c75dd7fce7bab2404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_swirles, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:55:29 compute-0 systemd[1]: Started libpod-conmon-dea1b745fd7d71264247cac1013d81a4a57eb5409e32230c75dd7fce7bab2404.scope.
Jan 21 23:55:29 compute-0 podman[261162]: 2026-01-21 23:55:28.946839289 +0000 UTC m=+0.045912807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:55:29 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc2d49cbe6721ae0fde59223a9b95386acbb3dac429a05830ca59c34f60e4a5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc2d49cbe6721ae0fde59223a9b95386acbb3dac429a05830ca59c34f60e4a5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc2d49cbe6721ae0fde59223a9b95386acbb3dac429a05830ca59c34f60e4a5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc2d49cbe6721ae0fde59223a9b95386acbb3dac429a05830ca59c34f60e4a5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:55:29 compute-0 podman[261162]: 2026-01-21 23:55:29.094250481 +0000 UTC m=+0.193324039 container init dea1b745fd7d71264247cac1013d81a4a57eb5409e32230c75dd7fce7bab2404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 21 23:55:29 compute-0 podman[261162]: 2026-01-21 23:55:29.106013223 +0000 UTC m=+0.205086731 container start dea1b745fd7d71264247cac1013d81a4a57eb5409e32230c75dd7fce7bab2404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_swirles, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 23:55:29 compute-0 podman[261162]: 2026-01-21 23:55:29.110246634 +0000 UTC m=+0.209320142 container attach dea1b745fd7d71264247cac1013d81a4a57eb5409e32230c75dd7fce7bab2404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_swirles, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 21 23:55:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:55:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:29.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:55:29 compute-0 wonderful_swirles[261178]: {
Jan 21 23:55:29 compute-0 wonderful_swirles[261178]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:55:29 compute-0 wonderful_swirles[261178]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:55:29 compute-0 wonderful_swirles[261178]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:55:29 compute-0 wonderful_swirles[261178]:         "osd_id": 1,
Jan 21 23:55:29 compute-0 wonderful_swirles[261178]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:55:29 compute-0 wonderful_swirles[261178]:         "type": "bluestore"
Jan 21 23:55:29 compute-0 wonderful_swirles[261178]:     }
Jan 21 23:55:29 compute-0 wonderful_swirles[261178]: }
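The `raw list` output above (the invoking cephadm command is logged at 23:55:28) is keyed by OSD fsid rather than OSD id, but describes the same OSD through its device-mapper path. A sketch of cross-checking the two listings, assuming both JSON blocks were saved from the log under hypothetical filenames:

import json

# Hypothetical filenames; the contents are the two JSON blocks logged above.
lvm = json.load(open("lvm_list.json"))   # keyed by osd_id   ("1": [...])
raw = json.load(open("raw_list.json"))   # keyed by osd_uuid ("4f45f4f4-...")

for osd_fsid, entry in raw.items():
    lv = lvm[str(entry["osd_id"])][0]
    # The LV tags and the raw listing should agree on OSD and cluster fsid.
    assert lv["tags"]["ceph.osd_fsid"] == osd_fsid
    assert lv["tags"]["ceph.cluster_fsid"] == entry["ceph_fsid"]
print("lvm list and raw list describe the same OSDs")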
Jan 21 23:55:30 compute-0 systemd[1]: libpod-dea1b745fd7d71264247cac1013d81a4a57eb5409e32230c75dd7fce7bab2404.scope: Deactivated successfully.
Jan 21 23:55:30 compute-0 podman[261162]: 2026-01-21 23:55:30.041045438 +0000 UTC m=+1.140118926 container died dea1b745fd7d71264247cac1013d81a4a57eb5409e32230c75dd7fce7bab2404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:55:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc2d49cbe6721ae0fde59223a9b95386acbb3dac429a05830ca59c34f60e4a5e-merged.mount: Deactivated successfully.
Jan 21 23:55:30 compute-0 podman[261162]: 2026-01-21 23:55:30.095265392 +0000 UTC m=+1.194338860 container remove dea1b745fd7d71264247cac1013d81a4a57eb5409e32230c75dd7fce7bab2404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_swirles, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 21 23:55:30 compute-0 systemd[1]: libpod-conmon-dea1b745fd7d71264247cac1013d81a4a57eb5409e32230c75dd7fce7bab2404.scope: Deactivated successfully.
Jan 21 23:55:30 compute-0 sudo[261006]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:55:30 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:55:30 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:30 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 6047c3d4-1a98-45fb-a994-1dbab51d620f does not exist
Jan 21 23:55:30 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 0f425f98-bae4-4e55-a427-07b3eb83761a does not exist
Jan 21 23:55:30 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 29f87570-6ac7-4413-87ef-1c227bed335d does not exist
Jan 21 23:55:30 compute-0 sudo[261214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:30 compute-0 sudo[261214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:30 compute-0 sudo[261214]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:30 compute-0 sudo[261239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:55:30 compute-0 sudo[261239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:30 compute-0 sudo[261239]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:55:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:30.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:55:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:55:31 compute-0 ceph-mon[74318]: pgmap v1113: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:31.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:32.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:55:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:55:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:33.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:55:33 compute-0 ceph-mon[74318]: pgmap v1114: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:34 compute-0 podman[261266]: 2026-01-21 23:55:34.013099337 +0000 UTC m=+0.122779193 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
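The `container health_status` events in this stream are podman's periodic healthcheck results for the edpm-managed containers; health_status=healthy with health_failing_streak=0 means the configured '/openstack/healthcheck' test passed. A sketch of pulling the status fields out of such a line (the sample string is a truncated illustration, not a verbatim copy):

import re

# Condensed copy of the podman healthcheck journal line above.
line = ("podman[261266]: ... container health_status 125f2645672c "
        "(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, "
        "name=ovn_controller, health_status=healthy, health_failing_streak=0, ...)")

# (?<!\w) keeps 'name=' from also matching inside 'container_name='.
fields = dict(re.findall(
    r"(?<!\w)(name|health_status|health_failing_streak)=([^,)]*)", line))
print(fields)
# {'name': 'ovn_controller', 'health_status': 'healthy', 'health_failing_streak': '0'}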
Jan 21 23:55:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:34.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:34 compute-0 ceph-mon[74318]: pgmap v1115: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:55:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:35.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:55:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:36.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:37.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:37 compute-0 ceph-mon[74318]: pgmap v1116: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:55:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:38.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:38 compute-0 ceph-mon[74318]: pgmap v1117: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:55:39
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'backups', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.meta', '.mgr']
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:55:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:55:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:39.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:55:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:55:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:40.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:40 compute-0 podman[261296]: 2026-01-21 23:55:40.981186357 +0000 UTC m=+0.080881408 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 21 23:55:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:55:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:41.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:55:41 compute-0 ceph-mon[74318]: pgmap v1118: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:42.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:42 compute-0 ceph-mon[74318]: pgmap v1119: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:55:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:43.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:44.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:55:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:45.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:55:45 compute-0 ceph-mon[74318]: pgmap v1120: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:55:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:46.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:55:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:46 compute-0 ceph-mon[74318]: pgmap v1121: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:55:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:55:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:47.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:55:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:55:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:48.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 21 23:55:48 compute-0 ceph-mon[74318]: pgmap v1122: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 21 23:55:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:55:48.753 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:55:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:55:48.754 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:55:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:55:48.754 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:55:48 compute-0 sudo[261321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:48 compute-0 sudo[261321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:48 compute-0 sudo[261321]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:48 compute-0 sudo[261346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:55:48 compute-0 sudo[261346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:55:48 compute-0 sudo[261346]: pam_unix(sudo:session): session closed for user root
Jan 21 23:55:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:49.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:50.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 21 23:55:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:51.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:51 compute-0 ceph-mon[74318]: pgmap v1123: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 21 23:55:52 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 21 23:55:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:52.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 8 op/s
Jan 21 23:55:52 compute-0 ceph-mon[74318]: pgmap v1124: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 8 op/s
Jan 21 23:55:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:55:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:53.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
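The pg_autoscaler figures above are internally consistent: each logged pg target equals capacity ratio × bias × 300, where 300 would be this cluster's PG budget (consistent with 3 OSDs at the default mon_target_pg_per_osd of 100 — an inference, not stated in the log); targets this far below the current pg_num are then left quantized at the current value, hence "quantized to 32 (current 32)". A worked check of three of the logged rows:

import math

# Reproducing the pg_autoscaler arithmetic logged above:
#   pg_target = capacity_ratio * bias * PG_BUDGET
# PG_BUDGET = 300 is an inference (3 OSDs * mon_target_pg_per_osd = 100).
PG_BUDGET = 300

pools = {  # name: (capacity_ratio, bias, pg target as logged)
    ".mgr":               (2.0538165363856318e-05, 1.0, 0.006161449609156895),
    "images":             (0.0019031427391587568, 1.0, 0.570942821747627),
    "cephfs.cephfs.meta": (1.4540294062907128e-06, 4.0, 0.0017448352875488555),
}

for name, (ratio, bias, logged_target) in pools.items():
    computed = ratio * bias * PG_BUDGET
    assert math.isclose(computed, logged_target, rel_tol=1e-9), name
    print(f"{name}: pg target {computed:.6g} (matches log)")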
Jan 21 23:55:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:54.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 8 op/s
Jan 21 23:55:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:55.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:55 compute-0 ceph-mon[74318]: pgmap v1125: 305 pgs: 305 active+clean; 41 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 8 op/s
Jan 21 23:55:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:56.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 82 MiB data, 258 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 30 op/s
Jan 21 23:55:56 compute-0 ceph-mon[74318]: pgmap v1126: 305 pgs: 305 active+clean; 82 MiB data, 258 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 30 op/s
Jan 21 23:55:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:57.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:55:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:55:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:55:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:55:58.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:55:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 88 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 21 23:55:58 compute-0 ceph-mon[74318]: pgmap v1127: 305 pgs: 305 active+clean; 88 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 21 23:55:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:55:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:55:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:55:59.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:56:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:00.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:56:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 88 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 50 op/s
Jan 21 23:56:00 compute-0 ceph-mon[74318]: pgmap v1128: 305 pgs: 305 active+clean; 88 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 50 op/s
Jan 21 23:56:00 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:56:00.799 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 23:56:00 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:56:00.800 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 23:56:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:56:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:01.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:56:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:56:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:02.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:56:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 88 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 21 23:56:02 compute-0 ceph-mon[74318]: pgmap v1129: 305 pgs: 305 active+clean; 88 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 21 23:56:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:56:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:56:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:03.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:56:03 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:56:03.805 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 23:56:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:04.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 88 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 21 23:56:04 compute-0 ceph-mon[74318]: pgmap v1130: 305 pgs: 305 active+clean; 88 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 21 23:56:05 compute-0 podman[261379]: 2026-01-21 23:56:05.069854584 +0000 UTC m=+0.177370885 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 21 23:56:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:05.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:06.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 119 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.2 MiB/s wr, 74 op/s
Jan 21 23:56:06 compute-0 ceph-mon[74318]: pgmap v1131: 305 pgs: 305 active+clean; 119 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.2 MiB/s wr, 74 op/s
Jan 21 23:56:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:07.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:56:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:08.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.9 MiB/s wr, 55 op/s
Jan 21 23:56:08 compute-0 ceph-mon[74318]: pgmap v1132: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.9 MiB/s wr, 55 op/s
Jan 21 23:56:09 compute-0 sudo[261407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:56:09 compute-0 sudo[261407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:09 compute-0 sudo[261407]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:09 compute-0 sudo[261432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:56:09 compute-0 sudo[261432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:09 compute-0 sudo[261432]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:56:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:56:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:56:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:56:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:56:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:56:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:09.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:10 compute-0 nova_compute[247516]: 2026-01-21 23:56:10.445 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:56:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:10.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 21 23:56:10 compute-0 ceph-mon[74318]: pgmap v1133: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 21 23:56:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:11.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:11 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3148385546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:56:11 compute-0 podman[261459]: 2026-01-21 23:56:11.964915831 +0000 UTC m=+0.077579086 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 21 23:56:11 compute-0 nova_compute[247516]: 2026-01-21 23:56:11.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:56:11 compute-0 nova_compute[247516]: 2026-01-21 23:56:11.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 23:56:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 21 23:56:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:12.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:12 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3431491877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:56:12 compute-0 ceph-mon[74318]: pgmap v1134: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 21 23:56:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:56:12 compute-0 nova_compute[247516]: 2026-01-21 23:56:12.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:56:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:13.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:13 compute-0 nova_compute[247516]: 2026-01-21 23:56:13.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:56:13 compute-0 nova_compute[247516]: 2026-01-21 23:56:13.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 23:56:13 compute-0 nova_compute[247516]: 2026-01-21 23:56:13.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 23:56:14 compute-0 nova_compute[247516]: 2026-01-21 23:56:14.036 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 23:56:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 21 23:56:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:56:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:14.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:56:14 compute-0 ceph-mon[74318]: pgmap v1135: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 21 23:56:14 compute-0 nova_compute[247516]: 2026-01-21 23:56:14.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:56:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:15.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:15 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1737242586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:56:15 compute-0 nova_compute[247516]: 2026-01-21 23:56:15.988 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:56:15 compute-0 nova_compute[247516]: 2026-01-21 23:56:15.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:56:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 21 23:56:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:56:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:16.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:56:16 compute-0 ceph-mon[74318]: pgmap v1136: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 21 23:56:16 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3297490583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:56:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:17.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:56:17 compute-0 nova_compute[247516]: 2026-01-21 23:56:17.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:56:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 334 KiB/s wr, 3 op/s
Jan 21 23:56:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:18.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:18 compute-0 ceph-mon[74318]: pgmap v1137: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 334 KiB/s wr, 3 op/s
Jan 21 23:56:18 compute-0 nova_compute[247516]: 2026-01-21 23:56:18.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:56:18 compute-0 nova_compute[247516]: 2026-01-21 23:56:18.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:56:19 compute-0 nova_compute[247516]: 2026-01-21 23:56:19.066 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:56:19 compute-0 nova_compute[247516]: 2026-01-21 23:56:19.067 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:56:19 compute-0 nova_compute[247516]: 2026-01-21 23:56:19.067 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:56:19 compute-0 nova_compute[247516]: 2026-01-21 23:56:19.068 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 23:56:19 compute-0 nova_compute[247516]: 2026-01-21 23:56:19.068 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:56:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:19.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:56:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1335960546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:56:19 compute-0 nova_compute[247516]: 2026-01-21 23:56:19.527 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:56:19 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1335960546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:56:19 compute-0 nova_compute[247516]: 2026-01-21 23:56:19.763 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 23:56:19 compute-0 nova_compute[247516]: 2026-01-21 23:56:19.765 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5191MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 23:56:19 compute-0 nova_compute[247516]: 2026-01-21 23:56:19.766 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:56:19 compute-0 nova_compute[247516]: 2026-01-21 23:56:19.767 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:56:19 compute-0 nova_compute[247516]: 2026-01-21 23:56:19.914 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 21 23:56:19 compute-0 nova_compute[247516]: 2026-01-21 23:56:19.915 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 23:56:19 compute-0 nova_compute[247516]: 2026-01-21 23:56:19.915 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 23:56:19 compute-0 nova_compute[247516]: 2026-01-21 23:56:19.984 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Refreshing inventories for resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 21 23:56:20 compute-0 nova_compute[247516]: 2026-01-21 23:56:20.051 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Updating ProviderTree inventory for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 21 23:56:20 compute-0 nova_compute[247516]: 2026-01-21 23:56:20.052 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Updating inventory in ProviderTree for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 21 23:56:20 compute-0 nova_compute[247516]: 2026-01-21 23:56:20.072 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Refreshing aggregate associations for resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 21 23:56:20 compute-0 nova_compute[247516]: 2026-01-21 23:56:20.094 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Refreshing trait associations for resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 21 23:56:20 compute-0 nova_compute[247516]: 2026-01-21 23:56:20.135 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:56:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 341 B/s wr, 0 op/s
Jan 21 23:56:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:56:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:20.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:56:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:56:20 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3802674638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:56:20 compute-0 nova_compute[247516]: 2026-01-21 23:56:20.630 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:56:20 compute-0 nova_compute[247516]: 2026-01-21 23:56:20.638 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 23:56:20 compute-0 ceph-mon[74318]: pgmap v1138: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 341 B/s wr, 0 op/s
Jan 21 23:56:20 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3802674638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:56:20 compute-0 nova_compute[247516]: 2026-01-21 23:56:20.684 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 23:56:20 compute-0 nova_compute[247516]: 2026-01-21 23:56:20.687 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 23:56:20 compute-0 nova_compute[247516]: 2026-01-21 23:56:20.688 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:56:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:21.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:56:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:22.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:22 compute-0 ceph-mon[74318]: pgmap v1139: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:56:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:56:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:23.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:56:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.002000062s ======
Jan 21 23:56:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:24.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000062s
Jan 21 23:56:24 compute-0 ceph-mon[74318]: pgmap v1140: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:56:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:25.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2524624498' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:56:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2524624498' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:56:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 10 op/s
Jan 21 23:56:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:26.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2979069543' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:56:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2979069543' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:56:26 compute-0 ceph-mon[74318]: pgmap v1141: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 10 op/s
Jan 21 23:56:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:27.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:56:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3492409455' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:56:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3492409455' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:56:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 10 op/s
Jan 21 23:56:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:28.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:29 compute-0 sudo[261530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:56:29 compute-0 sudo[261530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:29 compute-0 sudo[261530]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:29 compute-0 sudo[261555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:56:29 compute-0 sudo[261555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:29 compute-0 sudo[261555]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:29 compute-0 ceph-mon[74318]: pgmap v1142: 305 pgs: 305 active+clean; 134 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 10 op/s
Jan 21 23:56:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:29.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:30 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3732544826' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:56:30 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3732544826' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:56:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 93 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 767 B/s wr, 29 op/s
Jan 21 23:56:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:56:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:30.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:56:30 compute-0 sudo[261581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:56:30 compute-0 sudo[261581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:30 compute-0 sudo[261581]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:30 compute-0 sudo[261606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:56:30 compute-0 sudo[261606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:30 compute-0 sudo[261606]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:31 compute-0 sudo[261631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:56:31 compute-0 sudo[261631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:31 compute-0 sudo[261631]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:31 compute-0 sudo[261656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:56:31 compute-0 sudo[261656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:31 compute-0 ceph-mon[74318]: pgmap v1143: 305 pgs: 305 active+clean; 93 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 767 B/s wr, 29 op/s
Jan 21 23:56:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:31.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:56:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:56:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:56:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:56:31 compute-0 sudo[261656]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 21 23:56:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 21 23:56:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 21 23:56:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 21 23:56:32 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:56:32 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:56:32 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 21 23:56:32 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 21 23:56:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 KiB/s wr, 42 op/s
Jan 21 23:56:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:56:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:32.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:56:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:56:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:33.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:33 compute-0 ceph-mon[74318]: pgmap v1144: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 KiB/s wr, 42 op/s
Jan 21 23:56:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 KiB/s wr, 42 op/s
Jan 21 23:56:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:34.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:34 compute-0 ceph-mon[74318]: pgmap v1145: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 KiB/s wr, 42 op/s
Jan 21 23:56:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 21 23:56:35 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:56:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 21 23:56:35 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:56:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:56:35 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:56:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:56:35 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:56:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:56:35 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:56:35 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev b2db2885-71e0-497f-af6b-16283e35ad2c does not exist
Jan 21 23:56:35 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 41b3df56-4f20-4c93-8b1f-bf00814c6496 does not exist
Jan 21 23:56:35 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 3b01ac99-3c8f-48f3-a535-b6444ef872aa does not exist
Jan 21 23:56:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:56:35 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:56:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:56:35 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:56:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:56:35 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:56:35 compute-0 sudo[261714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:56:35 compute-0 sudo[261714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:35 compute-0 sudo[261714]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:35 compute-0 sudo[261745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:56:35 compute-0 sudo[261745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:35 compute-0 sudo[261745]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:35.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:35 compute-0 podman[261738]: 2026-01-21 23:56:35.441120876 +0000 UTC m=+0.117023680 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 21 23:56:35 compute-0 sudo[261786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:56:35 compute-0 sudo[261786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:35 compute-0 sudo[261786]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:35 compute-0 sudo[261816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:56:35 compute-0 sudo[261816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:35 compute-0 podman[261882]: 2026-01-21 23:56:35.92287586 +0000 UTC m=+0.047202804 container create 2688f3cdfd31baec5bc6c8d0f43cc2e23524f5e39e1c88d1f196f9ff971daf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bohr, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:56:35 compute-0 systemd[1]: Started libpod-conmon-2688f3cdfd31baec5bc6c8d0f43cc2e23524f5e39e1c88d1f196f9ff971daf28.scope.
Jan 21 23:56:35 compute-0 podman[261882]: 2026-01-21 23:56:35.900678477 +0000 UTC m=+0.025005451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:56:36 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:56:36 compute-0 podman[261882]: 2026-01-21 23:56:36.045861331 +0000 UTC m=+0.170188365 container init 2688f3cdfd31baec5bc6c8d0f43cc2e23524f5e39e1c88d1f196f9ff971daf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 21 23:56:36 compute-0 podman[261882]: 2026-01-21 23:56:36.057410617 +0000 UTC m=+0.181737561 container start 2688f3cdfd31baec5bc6c8d0f43cc2e23524f5e39e1c88d1f196f9ff971daf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bohr, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 21 23:56:36 compute-0 podman[261882]: 2026-01-21 23:56:36.061090279 +0000 UTC m=+0.185417263 container attach 2688f3cdfd31baec5bc6c8d0f43cc2e23524f5e39e1c88d1f196f9ff971daf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:56:36 compute-0 hardcore_bohr[261898]: 167 167
Jan 21 23:56:36 compute-0 systemd[1]: libpod-2688f3cdfd31baec5bc6c8d0f43cc2e23524f5e39e1c88d1f196f9ff971daf28.scope: Deactivated successfully.
Jan 21 23:56:36 compute-0 conmon[261898]: conmon 2688f3cdfd31baec5bc6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2688f3cdfd31baec5bc6c8d0f43cc2e23524f5e39e1c88d1f196f9ff971daf28.scope/container/memory.events
Jan 21 23:56:36 compute-0 podman[261882]: 2026-01-21 23:56:36.069149647 +0000 UTC m=+0.193476601 container died 2688f3cdfd31baec5bc6c8d0f43cc2e23524f5e39e1c88d1f196f9ff971daf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 21 23:56:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-953eb33ef820addfdb2b95a71ac5deae6d61bd014ea32b69b908980c6e8bddeb-merged.mount: Deactivated successfully.
Jan 21 23:56:36 compute-0 podman[261882]: 2026-01-21 23:56:36.120397973 +0000 UTC m=+0.244724917 container remove 2688f3cdfd31baec5bc6c8d0f43cc2e23524f5e39e1c88d1f196f9ff971daf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 21 23:56:36 compute-0 systemd[1]: libpod-conmon-2688f3cdfd31baec5bc6c8d0f43cc2e23524f5e39e1c88d1f196f9ff971daf28.scope: Deactivated successfully.
Jan 21 23:56:36 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:56:36 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:56:36 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:56:36 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:56:36 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:56:36 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:56:36 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:56:36 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:56:36 compute-0 podman[261923]: 2026-01-21 23:56:36.367738669 +0000 UTC m=+0.074419870 container create 62f089c1288b53e2ce765f6f74fba3515b25b1cf6e6ae38eedab0ea89ce8a4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lederberg, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:56:36 compute-0 systemd[1]: Started libpod-conmon-62f089c1288b53e2ce765f6f74fba3515b25b1cf6e6ae38eedab0ea89ce8a4dc.scope.
Jan 21 23:56:36 compute-0 podman[261923]: 2026-01-21 23:56:36.339139529 +0000 UTC m=+0.045820780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:56:36 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:56:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23662c2b7e09eea5fd2186d3f8692a70384ba2cf94cf919363b8e12aa8ff8a20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:56:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23662c2b7e09eea5fd2186d3f8692a70384ba2cf94cf919363b8e12aa8ff8a20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:56:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23662c2b7e09eea5fd2186d3f8692a70384ba2cf94cf919363b8e12aa8ff8a20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:56:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23662c2b7e09eea5fd2186d3f8692a70384ba2cf94cf919363b8e12aa8ff8a20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:56:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23662c2b7e09eea5fd2186d3f8692a70384ba2cf94cf919363b8e12aa8ff8a20/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:56:36 compute-0 podman[261923]: 2026-01-21 23:56:36.47182739 +0000 UTC m=+0.178508641 container init 62f089c1288b53e2ce765f6f74fba3515b25b1cf6e6ae38eedab0ea89ce8a4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lederberg, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 21 23:56:36 compute-0 podman[261923]: 2026-01-21 23:56:36.481807696 +0000 UTC m=+0.188488897 container start 62f089c1288b53e2ce765f6f74fba3515b25b1cf6e6ae38eedab0ea89ce8a4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:56:36 compute-0 podman[261923]: 2026-01-21 23:56:36.486315184 +0000 UTC m=+0.192996405 container attach 62f089c1288b53e2ce765f6f74fba3515b25b1cf6e6ae38eedab0ea89ce8a4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:56:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 88 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 KiB/s wr, 42 op/s
Jan 21 23:56:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:36.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:37 compute-0 ceph-mon[74318]: pgmap v1146: 305 pgs: 305 active+clean; 88 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 KiB/s wr, 42 op/s
Jan 21 23:56:37 compute-0 gracious_lederberg[261940]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:56:37 compute-0 gracious_lederberg[261940]: --> relative data size: 1.0
Jan 21 23:56:37 compute-0 gracious_lederberg[261940]: --> All data devices are unavailable
Jan 21 23:56:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:37.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:37 compute-0 systemd[1]: libpod-62f089c1288b53e2ce765f6f74fba3515b25b1cf6e6ae38eedab0ea89ce8a4dc.scope: Deactivated successfully.
Jan 21 23:56:37 compute-0 podman[261923]: 2026-01-21 23:56:37.455756394 +0000 UTC m=+1.162437575 container died 62f089c1288b53e2ce765f6f74fba3515b25b1cf6e6ae38eedab0ea89ce8a4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lederberg, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:56:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-23662c2b7e09eea5fd2186d3f8692a70384ba2cf94cf919363b8e12aa8ff8a20-merged.mount: Deactivated successfully.
Jan 21 23:56:37 compute-0 podman[261923]: 2026-01-21 23:56:37.514084398 +0000 UTC m=+1.220765609 container remove 62f089c1288b53e2ce765f6f74fba3515b25b1cf6e6ae38eedab0ea89ce8a4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 23:56:37 compute-0 systemd[1]: libpod-conmon-62f089c1288b53e2ce765f6f74fba3515b25b1cf6e6ae38eedab0ea89ce8a4dc.scope: Deactivated successfully.
Jan 21 23:56:37 compute-0 sudo[261816]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:37 compute-0 sudo[261968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:56:37 compute-0 sudo[261968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:37 compute-0 sudo[261968]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:37 compute-0 sudo[261993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:56:37 compute-0 sudo[261993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:37 compute-0 sudo[261993]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:56:37 compute-0 sudo[262018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:56:37 compute-0 sudo[262018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:37 compute-0 sudo[262018]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:37 compute-0 sudo[262043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:56:37 compute-0 sudo[262043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:38 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1555965640' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:56:38 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1555965640' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:56:38 compute-0 podman[262108]: 2026-01-21 23:56:38.416416684 +0000 UTC m=+0.047962856 container create 84b0e580029596ed0c6160b0b3dc8e8f681d5a52c5f6dbb6fd305ee3ff4b4cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_curran, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 21 23:56:38 compute-0 systemd[1]: Started libpod-conmon-84b0e580029596ed0c6160b0b3dc8e8f681d5a52c5f6dbb6fd305ee3ff4b4cb4.scope.
Jan 21 23:56:38 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:56:38 compute-0 podman[262108]: 2026-01-21 23:56:38.398997778 +0000 UTC m=+0.030543980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:56:38 compute-0 podman[262108]: 2026-01-21 23:56:38.496269609 +0000 UTC m=+0.127815841 container init 84b0e580029596ed0c6160b0b3dc8e8f681d5a52c5f6dbb6fd305ee3ff4b4cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Jan 21 23:56:38 compute-0 podman[262108]: 2026-01-21 23:56:38.504835002 +0000 UTC m=+0.136381184 container start 84b0e580029596ed0c6160b0b3dc8e8f681d5a52c5f6dbb6fd305ee3ff4b4cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:56:38 compute-0 podman[262108]: 2026-01-21 23:56:38.508906108 +0000 UTC m=+0.140452340 container attach 84b0e580029596ed0c6160b0b3dc8e8f681d5a52c5f6dbb6fd305ee3ff4b4cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_curran, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:56:38 compute-0 fervent_curran[262124]: 167 167
Jan 21 23:56:38 compute-0 systemd[1]: libpod-84b0e580029596ed0c6160b0b3dc8e8f681d5a52c5f6dbb6fd305ee3ff4b4cb4.scope: Deactivated successfully.
Jan 21 23:56:38 compute-0 podman[262108]: 2026-01-21 23:56:38.510735724 +0000 UTC m=+0.142281916 container died 84b0e580029596ed0c6160b0b3dc8e8f681d5a52c5f6dbb6fd305ee3ff4b4cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 23:56:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5af893d5bdd19228dc7b25543276457da1da3a217136441d2f8ccfe1cfa3420-merged.mount: Deactivated successfully.
Jan 21 23:56:38 compute-0 podman[262108]: 2026-01-21 23:56:38.549794725 +0000 UTC m=+0.181340927 container remove 84b0e580029596ed0c6160b0b3dc8e8f681d5a52c5f6dbb6fd305ee3ff4b4cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_curran, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:56:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 88 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Jan 21 23:56:38 compute-0 systemd[1]: libpod-conmon-84b0e580029596ed0c6160b0b3dc8e8f681d5a52c5f6dbb6fd305ee3ff4b4cb4.scope: Deactivated successfully.
Jan 21 23:56:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:38.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 21 23:56:38 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1545579299' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:56:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 21 23:56:38 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1545579299' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:56:38 compute-0 podman[262148]: 2026-01-21 23:56:38.760410271 +0000 UTC m=+0.065184745 container create f6a69f36b466f97bdd0dd143449f296a3a4dcff4b6b8ace143cdd0bfe70c3ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 23:56:38 compute-0 podman[262148]: 2026-01-21 23:56:38.723768285 +0000 UTC m=+0.028542799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:56:38 compute-0 systemd[1]: Started libpod-conmon-f6a69f36b466f97bdd0dd143449f296a3a4dcff4b6b8ace143cdd0bfe70c3ccb.scope.
Jan 21 23:56:38 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a110d3341b49d11fecbfbc9a0404c64011626f3cf580b3a01981f0acca3c81a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a110d3341b49d11fecbfbc9a0404c64011626f3cf580b3a01981f0acca3c81a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a110d3341b49d11fecbfbc9a0404c64011626f3cf580b3a01981f0acca3c81a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a110d3341b49d11fecbfbc9a0404c64011626f3cf580b3a01981f0acca3c81a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:56:38 compute-0 podman[262148]: 2026-01-21 23:56:38.874896452 +0000 UTC m=+0.179670926 container init f6a69f36b466f97bdd0dd143449f296a3a4dcff4b6b8ace143cdd0bfe70c3ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:56:38 compute-0 podman[262148]: 2026-01-21 23:56:38.88330592 +0000 UTC m=+0.188080354 container start f6a69f36b466f97bdd0dd143449f296a3a4dcff4b6b8ace143cdd0bfe70c3ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:56:38 compute-0 podman[262148]: 2026-01-21 23:56:38.886961063 +0000 UTC m=+0.191735597 container attach f6a69f36b466f97bdd0dd143449f296a3a4dcff4b6b8ace143cdd0bfe70c3ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:56:39
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'volumes', 'images', 'backups', 'vms']
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:56:39 compute-0 ceph-mon[74318]: pgmap v1147: 305 pgs: 305 active+clean; 88 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Jan 21 23:56:39 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1545579299' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:56:39 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1545579299' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:56:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:39.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:56:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:56:39 compute-0 pensive_gould[262165]: {
Jan 21 23:56:39 compute-0 pensive_gould[262165]:     "1": [
Jan 21 23:56:39 compute-0 pensive_gould[262165]:         {
Jan 21 23:56:39 compute-0 pensive_gould[262165]:             "devices": [
Jan 21 23:56:39 compute-0 pensive_gould[262165]:                 "/dev/loop3"
Jan 21 23:56:39 compute-0 pensive_gould[262165]:             ],
Jan 21 23:56:39 compute-0 pensive_gould[262165]:             "lv_name": "ceph_lv0",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:             "lv_size": "7511998464",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:             "name": "ceph_lv0",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:             "tags": {
Jan 21 23:56:39 compute-0 pensive_gould[262165]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:                 "ceph.cluster_name": "ceph",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:                 "ceph.crush_device_class": "",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:                 "ceph.encrypted": "0",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:                 "ceph.osd_id": "1",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:                 "ceph.type": "block",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:                 "ceph.vdo": "0"
Jan 21 23:56:39 compute-0 pensive_gould[262165]:             },
Jan 21 23:56:39 compute-0 pensive_gould[262165]:             "type": "block",
Jan 21 23:56:39 compute-0 pensive_gould[262165]:             "vg_name": "ceph_vg0"
Jan 21 23:56:39 compute-0 pensive_gould[262165]:         }
Jan 21 23:56:39 compute-0 pensive_gould[262165]:     ]
Jan 21 23:56:39 compute-0 pensive_gould[262165]: }
Jan 21 23:56:39 compute-0 systemd[1]: libpod-f6a69f36b466f97bdd0dd143449f296a3a4dcff4b6b8ace143cdd0bfe70c3ccb.scope: Deactivated successfully.
Jan 21 23:56:39 compute-0 podman[262148]: 2026-01-21 23:56:39.715484229 +0000 UTC m=+1.020258693 container died f6a69f36b466f97bdd0dd143449f296a3a4dcff4b6b8ace143cdd0bfe70c3ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 21 23:56:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a110d3341b49d11fecbfbc9a0404c64011626f3cf580b3a01981f0acca3c81a-merged.mount: Deactivated successfully.
Jan 21 23:56:39 compute-0 podman[262148]: 2026-01-21 23:56:39.799195903 +0000 UTC m=+1.103970377 container remove f6a69f36b466f97bdd0dd143449f296a3a4dcff4b6b8ace143cdd0bfe70c3ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 21 23:56:39 compute-0 systemd[1]: libpod-conmon-f6a69f36b466f97bdd0dd143449f296a3a4dcff4b6b8ace143cdd0bfe70c3ccb.scope: Deactivated successfully.
Jan 21 23:56:39 compute-0 sudo[262043]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:39 compute-0 sudo[262185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:56:39 compute-0 sudo[262185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:39 compute-0 sudo[262185]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:40 compute-0 sudo[262210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:56:40 compute-0 sudo[262210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:40 compute-0 sudo[262210]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:40 compute-0 sudo[262235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:56:40 compute-0 sudo[262235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:40 compute-0 sudo[262235]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:40 compute-0 sudo[262260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:56:40 compute-0 sudo[262260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:40 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/108755600' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:56:40 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/108755600' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:56:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 88 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 1.6 KiB/s wr, 64 op/s
Jan 21 23:56:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:56:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:40.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:56:40 compute-0 podman[262325]: 2026-01-21 23:56:40.645224348 +0000 UTC m=+0.058571872 container create 782d0146d75f7f973deb6f4f03a4c24d1fb6374fb686bc92bf923db87a984724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_margulis, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:56:40 compute-0 systemd[1]: Started libpod-conmon-782d0146d75f7f973deb6f4f03a4c24d1fb6374fb686bc92bf923db87a984724.scope.
Jan 21 23:56:40 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:56:40 compute-0 podman[262325]: 2026-01-21 23:56:40.623106008 +0000 UTC m=+0.036453632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:56:40 compute-0 podman[262325]: 2026-01-21 23:56:40.729088607 +0000 UTC m=+0.142436151 container init 782d0146d75f7f973deb6f4f03a4c24d1fb6374fb686bc92bf923db87a984724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:56:40 compute-0 podman[262325]: 2026-01-21 23:56:40.735283697 +0000 UTC m=+0.148631221 container start 782d0146d75f7f973deb6f4f03a4c24d1fb6374fb686bc92bf923db87a984724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_margulis, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 23:56:40 compute-0 podman[262325]: 2026-01-21 23:56:40.739005761 +0000 UTC m=+0.152353285 container attach 782d0146d75f7f973deb6f4f03a4c24d1fb6374fb686bc92bf923db87a984724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_margulis, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 23:56:40 compute-0 ecstatic_margulis[262341]: 167 167
Jan 21 23:56:40 compute-0 systemd[1]: libpod-782d0146d75f7f973deb6f4f03a4c24d1fb6374fb686bc92bf923db87a984724.scope: Deactivated successfully.
Jan 21 23:56:40 compute-0 podman[262325]: 2026-01-21 23:56:40.741860089 +0000 UTC m=+0.155207613 container died 782d0146d75f7f973deb6f4f03a4c24d1fb6374fb686bc92bf923db87a984724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 23:56:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cd4fe24cc6e10ca609db892d41a2229e7406e823fc6f1a665129ed33bd138cc-merged.mount: Deactivated successfully.
Jan 21 23:56:40 compute-0 podman[262325]: 2026-01-21 23:56:40.777524066 +0000 UTC m=+0.190871610 container remove 782d0146d75f7f973deb6f4f03a4c24d1fb6374fb686bc92bf923db87a984724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:56:40 compute-0 systemd[1]: libpod-conmon-782d0146d75f7f973deb6f4f03a4c24d1fb6374fb686bc92bf923db87a984724.scope: Deactivated successfully.
Jan 21 23:56:40 compute-0 podman[262364]: 2026-01-21 23:56:40.986214793 +0000 UTC m=+0.040780594 container create 7b7c692c32f5cae602c0909410fdbd8d43542245c3999e32341ec08a9fa731fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 21 23:56:41 compute-0 systemd[1]: Started libpod-conmon-7b7c692c32f5cae602c0909410fdbd8d43542245c3999e32341ec08a9fa731fd.scope.
Jan 21 23:56:41 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8405536e7e70f2442202a55437935cb88cfb571ddbae25218afce82238d76169/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8405536e7e70f2442202a55437935cb88cfb571ddbae25218afce82238d76169/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8405536e7e70f2442202a55437935cb88cfb571ddbae25218afce82238d76169/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8405536e7e70f2442202a55437935cb88cfb571ddbae25218afce82238d76169/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:56:41 compute-0 podman[262364]: 2026-01-21 23:56:40.96888662 +0000 UTC m=+0.023452461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:56:41 compute-0 podman[262364]: 2026-01-21 23:56:41.083765823 +0000 UTC m=+0.138331674 container init 7b7c692c32f5cae602c0909410fdbd8d43542245c3999e32341ec08a9fa731fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bhaskara, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 23:56:41 compute-0 podman[262364]: 2026-01-21 23:56:41.088970342 +0000 UTC m=+0.143536153 container start 7b7c692c32f5cae602c0909410fdbd8d43542245c3999e32341ec08a9fa731fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:56:41 compute-0 podman[262364]: 2026-01-21 23:56:41.09213085 +0000 UTC m=+0.146696701 container attach 7b7c692c32f5cae602c0909410fdbd8d43542245c3999e32341ec08a9fa731fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bhaskara, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 21 23:56:41 compute-0 ceph-mon[74318]: pgmap v1148: 305 pgs: 305 active+clean; 88 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 1.6 KiB/s wr, 64 op/s
Jan 21 23:56:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:41.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:41 compute-0 unruffled_bhaskara[262381]: {
Jan 21 23:56:41 compute-0 unruffled_bhaskara[262381]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:56:41 compute-0 unruffled_bhaskara[262381]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:56:41 compute-0 unruffled_bhaskara[262381]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:56:41 compute-0 unruffled_bhaskara[262381]:         "osd_id": 1,
Jan 21 23:56:41 compute-0 unruffled_bhaskara[262381]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:56:41 compute-0 unruffled_bhaskara[262381]:         "type": "bluestore"
Jan 21 23:56:41 compute-0 unruffled_bhaskara[262381]:     }
Jan 21 23:56:41 compute-0 unruffled_bhaskara[262381]: }
Jan 21 23:56:41 compute-0 systemd[1]: libpod-7b7c692c32f5cae602c0909410fdbd8d43542245c3999e32341ec08a9fa731fd.scope: Deactivated successfully.
Jan 21 23:56:41 compute-0 podman[262364]: 2026-01-21 23:56:41.994325352 +0000 UTC m=+1.048891183 container died 7b7c692c32f5cae602c0909410fdbd8d43542245c3999e32341ec08a9fa731fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bhaskara, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 21 23:56:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8405536e7e70f2442202a55437935cb88cfb571ddbae25218afce82238d76169-merged.mount: Deactivated successfully.
Jan 21 23:56:42 compute-0 podman[262364]: 2026-01-21 23:56:42.055537594 +0000 UTC m=+1.110103405 container remove 7b7c692c32f5cae602c0909410fdbd8d43542245c3999e32341ec08a9fa731fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bhaskara, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:56:42 compute-0 systemd[1]: libpod-conmon-7b7c692c32f5cae602c0909410fdbd8d43542245c3999e32341ec08a9fa731fd.scope: Deactivated successfully.
Jan 21 23:56:42 compute-0 sudo[262260]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:56:42 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:56:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:56:42 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:56:42 compute-0 podman[262403]: 2026-01-21 23:56:42.117227311 +0000 UTC m=+0.091558166 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 23:56:42 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 49aa2f15-3588-46f7-be00-8a4a9ae5856f does not exist
Jan 21 23:56:42 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 11b3de6b-ca62-4bb9-8003-6963b4b0f0ee does not exist
Jan 21 23:56:42 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev b317baed-f266-4b9d-a63f-f06dc4046eb6 does not exist
Jan 21 23:56:42 compute-0 sudo[262435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:56:42 compute-0 sudo[262435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:42 compute-0 sudo[262435]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:42 compute-0 sudo[262460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:56:42 compute-0 sudo[262460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:42 compute-0 sudo[262460]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 73 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 852 B/s wr, 49 op/s
Jan 21 23:56:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:42.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:56:43 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:56:43 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:56:43 compute-0 ceph-mon[74318]: pgmap v1149: 305 pgs: 305 active+clean; 73 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 852 B/s wr, 49 op/s
Jan 21 23:56:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:56:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:43.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:56:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 73 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 511 B/s wr, 35 op/s
Jan 21 23:56:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:56:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:44.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:56:44 compute-0 ceph-mon[74318]: pgmap v1150: 305 pgs: 305 active+clean; 73 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 511 B/s wr, 35 op/s
Jan 21 23:56:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:45.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 KiB/s wr, 42 op/s
Jan 21 23:56:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:46.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:46 compute-0 ceph-mon[74318]: pgmap v1151: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 KiB/s wr, 42 op/s
Jan 21 23:56:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:47.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:56:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 KiB/s wr, 42 op/s
Jan 21 23:56:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:48.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:48 compute-0 ceph-mon[74318]: pgmap v1152: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 KiB/s wr, 42 op/s
Jan 21 23:56:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:56:48.754 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:56:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:56:48.755 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:56:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:56:48.756 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:56:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:49.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:49 compute-0 sudo[262489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:56:49 compute-0 sudo[262489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:49 compute-0 sudo[262489]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:49 compute-0 sudo[262514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:56:49 compute-0 sudo[262514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:56:49 compute-0 sudo[262514]: pam_unix(sudo:session): session closed for user root
Jan 21 23:56:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 41 MiB data, 238 MiB used, 21 GiB / 21 GiB avail; 99 KiB/s rd, 1.4 KiB/s wr, 156 op/s
Jan 21 23:56:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:50.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:50 compute-0 ceph-mon[74318]: pgmap v1153: 305 pgs: 305 active+clean; 41 MiB data, 238 MiB used, 21 GiB / 21 GiB avail; 99 KiB/s rd, 1.4 KiB/s wr, 156 op/s
Jan 21 23:56:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:51.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 105 KiB/s rd, 938 B/s wr, 175 op/s
Jan 21 23:56:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:52.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:52 compute-0 ceph-mon[74318]: pgmap v1154: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 105 KiB/s rd, 938 B/s wr, 175 op/s
Jan 21 23:56:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:56:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:53.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:56:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 938 B/s wr, 171 op/s
Jan 21 23:56:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.005000155s ======
Jan 21 23:56:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:54.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000155s
Jan 21 23:56:54 compute-0 ceph-mon[74318]: pgmap v1155: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 938 B/s wr, 171 op/s
Jan 21 23:56:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:56:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:55.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:56:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 112 KiB/s rd, 938 B/s wr, 186 op/s
Jan 21 23:56:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:56.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:56 compute-0 ceph-mon[74318]: pgmap v1156: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 112 KiB/s rd, 938 B/s wr, 186 op/s
Jan 21 23:56:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:57.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:56:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 108 KiB/s rd, 341 B/s wr, 179 op/s
Jan 21 23:56:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:56:58.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:56:58 compute-0 ceph-mon[74318]: pgmap v1157: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 108 KiB/s rd, 341 B/s wr, 179 op/s
Jan 21 23:56:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:56:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:56:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:56:59.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:00 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3226517775' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:57:00 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3226517775' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:57:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 109 KiB/s rd, 341 B/s wr, 181 op/s
Jan 21 23:57:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:57:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:00.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:57:01 compute-0 ceph-mon[74318]: pgmap v1158: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 109 KiB/s rd, 341 B/s wr, 181 op/s
Jan 21 23:57:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:01.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 0 B/s wr, 68 op/s
Jan 21 23:57:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:57:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:02.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:57:02 compute-0 ceph-mon[74318]: pgmap v1159: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 0 B/s wr, 68 op/s
Jan 21 23:57:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:57:03 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:57:03.320 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 23:57:03 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:57:03.322 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 23:57:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:03.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Jan 21 23:57:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:04.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:04 compute-0 ceph-mon[74318]: pgmap v1160: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Jan 21 23:57:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:05.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:06 compute-0 podman[262547]: 2026-01-21 23:57:06.008192969 +0000 UTC m=+0.110317783 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 21 23:57:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 255 B/s wr, 28 op/s
Jan 21 23:57:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:06.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:06 compute-0 ceph-mon[74318]: pgmap v1161: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 255 B/s wr, 28 op/s
Jan 21 23:57:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:07.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:57:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 21 23:57:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:57:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:08.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:57:08 compute-0 ceph-mon[74318]: pgmap v1162: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 21 23:57:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:57:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:57:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:57:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:57:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:57:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:57:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:09.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:09 compute-0 sudo[262578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:57:09 compute-0 sudo[262578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:09 compute-0 sudo[262578]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:09 compute-0 sudo[262603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:57:09 compute-0 sudo[262603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:09 compute-0 sudo[262603]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 21 23:57:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:10.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:10 compute-0 ceph-mon[74318]: pgmap v1163: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 21 23:57:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:11.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:11 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3727921245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:57:12 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:57:12.324 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 23:57:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 255 B/s wr, 11 op/s
Jan 21 23:57:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:12.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:12 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2427224066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:57:12 compute-0 ceph-mon[74318]: pgmap v1164: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 255 B/s wr, 11 op/s
Jan 21 23:57:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:57:12 compute-0 podman[262629]: 2026-01-21 23:57:12.935200679 +0000 UTC m=+0.056998694 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 23:57:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:13.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:13 compute-0 nova_compute[247516]: 2026-01-21 23:57:13.689 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:57:13 compute-0 nova_compute[247516]: 2026-01-21 23:57:13.690 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 23:57:13 compute-0 nova_compute[247516]: 2026-01-21 23:57:13.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:57:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 255 B/s wr, 10 op/s
Jan 21 23:57:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:14.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:14.653542) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039834653688, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2145, "num_deletes": 254, "total_data_size": 3930024, "memory_usage": 3988304, "flush_reason": "Manual Compaction"}
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039834680899, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 3779947, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24176, "largest_seqno": 26320, "table_properties": {"data_size": 3770409, "index_size": 5968, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19954, "raw_average_key_size": 20, "raw_value_size": 3751180, "raw_average_value_size": 3835, "num_data_blocks": 265, "num_entries": 978, "num_filter_entries": 978, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769039640, "oldest_key_time": 1769039640, "file_creation_time": 1769039834, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 27385 microseconds, and 13761 cpu microseconds.
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:14.680973) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 3779947 bytes OK
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:14.681072) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:14.683730) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:14.683799) EVENT_LOG_v1 {"time_micros": 1769039834683785, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:14.683832) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 3921295, prev total WAL file size 3922002, number of live WAL files 2.
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:14.685751) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3691KB)], [56(9007KB)]
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039834685919, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 13003487, "oldest_snapshot_seqno": -1}
Jan 21 23:57:14 compute-0 ceph-mon[74318]: pgmap v1165: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 255 B/s wr, 10 op/s
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5312 keys, 11041111 bytes, temperature: kUnknown
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039834790758, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 11041111, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11002782, "index_size": 23947, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 132683, "raw_average_key_size": 24, "raw_value_size": 10904034, "raw_average_value_size": 2052, "num_data_blocks": 988, "num_entries": 5312, "num_filter_entries": 5312, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769039834, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:14.791068) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 11041111 bytes
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:14.792914) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.9 rd, 105.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.8 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 5840, records dropped: 528 output_compression: NoCompression
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:14.792953) EVENT_LOG_v1 {"time_micros": 1769039834792932, "job": 30, "event": "compaction_finished", "compaction_time_micros": 104921, "compaction_time_cpu_micros": 48296, "output_level": 6, "num_output_files": 1, "total_output_size": 11041111, "num_input_records": 5840, "num_output_records": 5312, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039834794538, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039834797075, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:14.685646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:14.797146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:14.797151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:14.797153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:14.797156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:57:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:14.797157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:57:14 compute-0 nova_compute[247516]: 2026-01-21 23:57:14.987 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:57:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:15.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:15 compute-0 nova_compute[247516]: 2026-01-21 23:57:15.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:57:15 compute-0 nova_compute[247516]: 2026-01-21 23:57:15.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:57:15 compute-0 nova_compute[247516]: 2026-01-21 23:57:15.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 23:57:15 compute-0 nova_compute[247516]: 2026-01-21 23:57:15.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 23:57:16 compute-0 nova_compute[247516]: 2026-01-21 23:57:16.012 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 23:57:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 255 B/s wr, 10 op/s
Jan 21 23:57:16 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2523963336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:57:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:16.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:16 compute-0 nova_compute[247516]: 2026-01-21 23:57:16.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:57:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:17.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:17 compute-0 ceph-mon[74318]: pgmap v1166: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 255 B/s wr, 10 op/s
Jan 21 23:57:17 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2889198332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:57:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:57:17 compute-0 nova_compute[247516]: 2026-01-21 23:57:17.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:57:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:18.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:18 compute-0 ceph-mon[74318]: pgmap v1167: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:18 compute-0 nova_compute[247516]: 2026-01-21 23:57:18.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:57:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:19.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:19 compute-0 nova_compute[247516]: 2026-01-21 23:57:19.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:57:19 compute-0 nova_compute[247516]: 2026-01-21 23:57:19.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:57:20 compute-0 nova_compute[247516]: 2026-01-21 23:57:20.024 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:57:20 compute-0 nova_compute[247516]: 2026-01-21 23:57:20.025 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:57:20 compute-0 nova_compute[247516]: 2026-01-21 23:57:20.025 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:57:20 compute-0 nova_compute[247516]: 2026-01-21 23:57:20.026 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 23:57:20 compute-0 nova_compute[247516]: 2026-01-21 23:57:20.026 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:57:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:57:20 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2489080031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:57:20 compute-0 nova_compute[247516]: 2026-01-21 23:57:20.534 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:57:20 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2489080031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:57:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:57:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:20.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:57:20 compute-0 nova_compute[247516]: 2026-01-21 23:57:20.793 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 23:57:20 compute-0 nova_compute[247516]: 2026-01-21 23:57:20.795 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5197MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 23:57:20 compute-0 nova_compute[247516]: 2026-01-21 23:57:20.795 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:57:20 compute-0 nova_compute[247516]: 2026-01-21 23:57:20.795 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:57:21 compute-0 nova_compute[247516]: 2026-01-21 23:57:21.044 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 21 23:57:21 compute-0 nova_compute[247516]: 2026-01-21 23:57:21.045 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 23:57:21 compute-0 nova_compute[247516]: 2026-01-21 23:57:21.045 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 23:57:21 compute-0 nova_compute[247516]: 2026-01-21 23:57:21.083 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:57:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:57:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:21.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:57:21 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:57:21 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1125126795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:57:21 compute-0 nova_compute[247516]: 2026-01-21 23:57:21.576 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:57:21 compute-0 ceph-mon[74318]: pgmap v1168: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1125126795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:57:21 compute-0 nova_compute[247516]: 2026-01-21 23:57:21.585 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 23:57:21 compute-0 nova_compute[247516]: 2026-01-21 23:57:21.603 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 23:57:21 compute-0 nova_compute[247516]: 2026-01-21 23:57:21.606 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 23:57:21 compute-0 nova_compute[247516]: 2026-01-21 23:57:21.606 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:57:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:57:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:22.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:57:22 compute-0 ceph-mon[74318]: pgmap v1169: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:57:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:23.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:24.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:24 compute-0 ceph-mon[74318]: pgmap v1170: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:25.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/138375614' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:57:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/138375614' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:57:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:26.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:26 compute-0 ceph-mon[74318]: pgmap v1171: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:27.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:27.839535) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039847839679, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 378, "num_deletes": 256, "total_data_size": 220333, "memory_usage": 228120, "flush_reason": "Manual Compaction"}
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039847843476, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 218477, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26321, "largest_seqno": 26698, "table_properties": {"data_size": 216220, "index_size": 357, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5295, "raw_average_key_size": 17, "raw_value_size": 211729, "raw_average_value_size": 682, "num_data_blocks": 16, "num_entries": 310, "num_filter_entries": 310, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769039834, "oldest_key_time": 1769039834, "file_creation_time": 1769039847, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 3916 microseconds, and 1695 cpu microseconds.
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:27.843509) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 218477 bytes OK
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:27.843526) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:27.844608) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:27.844624) EVENT_LOG_v1 {"time_micros": 1769039847844620, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:27.844646) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 217863, prev total WAL file size 217863, number of live WAL files 2.
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:27.845212) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(213KB)], [59(10MB)]
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039847845328, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 11259588, "oldest_snapshot_seqno": -1}
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5102 keys, 11173558 bytes, temperature: kUnknown
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039847941227, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 11173558, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11135769, "index_size": 23919, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12805, "raw_key_size": 129519, "raw_average_key_size": 25, "raw_value_size": 11039856, "raw_average_value_size": 2163, "num_data_blocks": 983, "num_entries": 5102, "num_filter_entries": 5102, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769039847, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:27.941599) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 11173558 bytes
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:27.942997) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.3 rd, 116.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.5 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(102.7) write-amplify(51.1) OK, records in: 5622, records dropped: 520 output_compression: NoCompression
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:27.943018) EVENT_LOG_v1 {"time_micros": 1769039847943008, "job": 32, "event": "compaction_finished", "compaction_time_micros": 95991, "compaction_time_cpu_micros": 48823, "output_level": 6, "num_output_files": 1, "total_output_size": 11173558, "num_input_records": 5622, "num_output_records": 5102, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039847943223, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769039847945604, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:27.845038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:27.945736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:27.945746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:27.945748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:27.945751) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:57:27 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/21-23:57:27.945754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 23:57:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:57:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:28.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:57:28 compute-0 ceph-mon[74318]: pgmap v1172: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:29.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:30 compute-0 sudo[262702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:57:30 compute-0 sudo[262702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:30 compute-0 sudo[262702]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:30 compute-0 sudo[262727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:57:30 compute-0 sudo[262727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:30 compute-0 sudo[262727]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:30.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:30 compute-0 ceph-mon[74318]: pgmap v1173: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:57:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:31.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:57:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:32.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:32 compute-0 ceph-mon[74318]: pgmap v1174: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:57:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:33.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:34.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:34 compute-0 ceph-mon[74318]: pgmap v1175: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:35.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:36.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:36 compute-0 ceph-mon[74318]: pgmap v1176: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:37 compute-0 podman[262755]: 2026-01-21 23:57:37.042510331 +0000 UTC m=+0.145680961 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 21 23:57:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:37.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:57:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:57:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:38.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:57:38 compute-0 ceph-mon[74318]: pgmap v1177: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:57:39
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups']
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:57:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:39.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:57:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:57:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:57:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:40.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:57:40 compute-0 ceph-mon[74318]: pgmap v1178: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:41.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:42.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:42 compute-0 ceph-mon[74318]: pgmap v1179: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:42 compute-0 sudo[262785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:57:42 compute-0 sudo[262785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:42 compute-0 sudo[262785]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:57:42 compute-0 sudo[262810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:57:42 compute-0 sudo[262810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:42 compute-0 sudo[262810]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:42 compute-0 sudo[262835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:57:42 compute-0 sudo[262835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:42 compute-0 sudo[262835]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:43 compute-0 sudo[262860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:57:43 compute-0 sudo[262860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:43 compute-0 podman[262884]: 2026-01-21 23:57:43.109897878 +0000 UTC m=+0.076872555 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 23:57:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:57:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:43.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:57:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 21 23:57:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 23:57:43 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 21 23:57:43 compute-0 sudo[262860]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:57:43 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:57:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:57:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:57:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:57:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:57:43 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev a20ef2c3-7ddc-4a97-8fea-00e33db0e622 does not exist
Jan 21 23:57:43 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 1220c9d3-f457-49eb-9755-68a420255ee9 does not exist
Jan 21 23:57:43 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev add08552-1cdf-472d-83ca-e0c1d0e1c459 does not exist
Jan 21 23:57:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:57:43 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:57:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:57:43 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:57:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:57:43 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:57:43 compute-0 sudo[262937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:57:43 compute-0 sudo[262937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:43 compute-0 sudo[262937]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:43 compute-0 sudo[262962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:57:43 compute-0 sudo[262962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:43 compute-0 sudo[262962]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:43 compute-0 sudo[262987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:57:43 compute-0 sudo[262987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:43 compute-0 sudo[262987]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:44 compute-0 sudo[263012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:57:44 compute-0 sudo[263012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:44 compute-0 podman[263077]: 2026-01-21 23:57:44.409778378 +0000 UTC m=+0.064024560 container create e33dd689f77cfb09d9ee86a55cf1cb5304894df6009814ee6a2c1a6b4648c3c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 21 23:57:44 compute-0 systemd[1]: Started libpod-conmon-e33dd689f77cfb09d9ee86a55cf1cb5304894df6009814ee6a2c1a6b4648c3c1.scope.
Jan 21 23:57:44 compute-0 podman[263077]: 2026-01-21 23:57:44.387607896 +0000 UTC m=+0.041854058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:57:44 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:57:44 compute-0 podman[263077]: 2026-01-21 23:57:44.512745994 +0000 UTC m=+0.166992216 container init e33dd689f77cfb09d9ee86a55cf1cb5304894df6009814ee6a2c1a6b4648c3c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_beaver, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 21 23:57:44 compute-0 podman[263077]: 2026-01-21 23:57:44.523364011 +0000 UTC m=+0.177610153 container start e33dd689f77cfb09d9ee86a55cf1cb5304894df6009814ee6a2c1a6b4648c3c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 23:57:44 compute-0 podman[263077]: 2026-01-21 23:57:44.526841127 +0000 UTC m=+0.181087319 container attach e33dd689f77cfb09d9ee86a55cf1cb5304894df6009814ee6a2c1a6b4648c3c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_beaver, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 21 23:57:44 compute-0 frosty_beaver[263094]: 167 167
Jan 21 23:57:44 compute-0 systemd[1]: libpod-e33dd689f77cfb09d9ee86a55cf1cb5304894df6009814ee6a2c1a6b4648c3c1.scope: Deactivated successfully.
Jan 21 23:57:44 compute-0 conmon[263094]: conmon e33dd689f77cfb09d9ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e33dd689f77cfb09d9ee86a55cf1cb5304894df6009814ee6a2c1a6b4648c3c1.scope/container/memory.events
Jan 21 23:57:44 compute-0 podman[263077]: 2026-01-21 23:57:44.532184672 +0000 UTC m=+0.186430874 container died e33dd689f77cfb09d9ee86a55cf1cb5304894df6009814ee6a2c1a6b4648c3c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_beaver, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:57:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1b835959cd7d6f2ac4416f8274b5e16c3fa7974d6cd1ac4c3a9fe1b68c535fa-merged.mount: Deactivated successfully.
Jan 21 23:57:44 compute-0 podman[263077]: 2026-01-21 23:57:44.585827301 +0000 UTC m=+0.240073483 container remove e33dd689f77cfb09d9ee86a55cf1cb5304894df6009814ee6a2c1a6b4648c3c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_beaver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 21 23:57:44 compute-0 systemd[1]: libpod-conmon-e33dd689f77cfb09d9ee86a55cf1cb5304894df6009814ee6a2c1a6b4648c3c1.scope: Deactivated successfully.
Jan 21 23:57:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:44.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:57:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:57:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:57:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:57:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:57:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:57:44 compute-0 ceph-mon[74318]: pgmap v1180: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:44 compute-0 podman[263118]: 2026-01-21 23:57:44.861651913 +0000 UTC m=+0.074440560 container create 9789531613b05c87f1ae91a99ca5eaacb3e99ca77da8977cc059c2071504a33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:57:44 compute-0 systemd[1]: Started libpod-conmon-9789531613b05c87f1ae91a99ca5eaacb3e99ca77da8977cc059c2071504a33c.scope.
Jan 21 23:57:44 compute-0 podman[263118]: 2026-01-21 23:57:44.83389429 +0000 UTC m=+0.046682987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:57:44 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b8650e299ba5d05d91c01fd5f120159e0f1c44d7bd49eb2f936a78dcfc0ebb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b8650e299ba5d05d91c01fd5f120159e0f1c44d7bd49eb2f936a78dcfc0ebb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b8650e299ba5d05d91c01fd5f120159e0f1c44d7bd49eb2f936a78dcfc0ebb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b8650e299ba5d05d91c01fd5f120159e0f1c44d7bd49eb2f936a78dcfc0ebb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b8650e299ba5d05d91c01fd5f120159e0f1c44d7bd49eb2f936a78dcfc0ebb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:57:44 compute-0 podman[263118]: 2026-01-21 23:57:44.96337245 +0000 UTC m=+0.176161077 container init 9789531613b05c87f1ae91a99ca5eaacb3e99ca77da8977cc059c2071504a33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 23:57:44 compute-0 podman[263118]: 2026-01-21 23:57:44.976586427 +0000 UTC m=+0.189375034 container start 9789531613b05c87f1ae91a99ca5eaacb3e99ca77da8977cc059c2071504a33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:57:44 compute-0 podman[263118]: 2026-01-21 23:57:44.979888239 +0000 UTC m=+0.192676886 container attach 9789531613b05c87f1ae91a99ca5eaacb3e99ca77da8977cc059c2071504a33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 23:57:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:45.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:45 compute-0 epic_khorana[263135]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:57:45 compute-0 epic_khorana[263135]: --> relative data size: 1.0
Jan 21 23:57:45 compute-0 epic_khorana[263135]: --> All data devices are unavailable
Jan 21 23:57:45 compute-0 systemd[1]: libpod-9789531613b05c87f1ae91a99ca5eaacb3e99ca77da8977cc059c2071504a33c.scope: Deactivated successfully.
Jan 21 23:57:45 compute-0 podman[263118]: 2026-01-21 23:57:45.843544215 +0000 UTC m=+1.056332862 container died 9789531613b05c87f1ae91a99ca5eaacb3e99ca77da8977cc059c2071504a33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:57:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3b8650e299ba5d05d91c01fd5f120159e0f1c44d7bd49eb2f936a78dcfc0ebb-merged.mount: Deactivated successfully.
Jan 21 23:57:45 compute-0 podman[263118]: 2026-01-21 23:57:45.92174295 +0000 UTC m=+1.134531587 container remove 9789531613b05c87f1ae91a99ca5eaacb3e99ca77da8977cc059c2071504a33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_khorana, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:57:45 compute-0 systemd[1]: libpod-conmon-9789531613b05c87f1ae91a99ca5eaacb3e99ca77da8977cc059c2071504a33c.scope: Deactivated successfully.
Jan 21 23:57:45 compute-0 sudo[263012]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:46 compute-0 sudo[263162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:57:46 compute-0 sudo[263162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:46 compute-0 sudo[263162]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:46 compute-0 sudo[263187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:57:46 compute-0 sudo[263187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:46 compute-0 sudo[263187]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:46 compute-0 sudo[263212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:57:46 compute-0 sudo[263212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:46 compute-0 sudo[263212]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:46 compute-0 sudo[263237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:57:46 compute-0 sudo[263237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:57:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:46.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:57:46 compute-0 ceph-mon[74318]: pgmap v1181: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:46 compute-0 podman[263301]: 2026-01-21 23:57:46.727647101 +0000 UTC m=+0.070131538 container create 8061298658a2bfb555099e0736a8cd051474a4b024844af96402e64317dfc1c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:57:46 compute-0 systemd[1]: Started libpod-conmon-8061298658a2bfb555099e0736a8cd051474a4b024844af96402e64317dfc1c5.scope.
Jan 21 23:57:46 compute-0 podman[263301]: 2026-01-21 23:57:46.701237919 +0000 UTC m=+0.043722396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:57:46 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:57:46 compute-0 podman[263301]: 2026-01-21 23:57:46.818276618 +0000 UTC m=+0.160761055 container init 8061298658a2bfb555099e0736a8cd051474a4b024844af96402e64317dfc1c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_greider, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:57:46 compute-0 podman[263301]: 2026-01-21 23:57:46.830982678 +0000 UTC m=+0.173467085 container start 8061298658a2bfb555099e0736a8cd051474a4b024844af96402e64317dfc1c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 23:57:46 compute-0 podman[263301]: 2026-01-21 23:57:46.834689952 +0000 UTC m=+0.177174379 container attach 8061298658a2bfb555099e0736a8cd051474a4b024844af96402e64317dfc1c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 23:57:46 compute-0 vibrant_greider[263317]: 167 167
Jan 21 23:57:46 compute-0 systemd[1]: libpod-8061298658a2bfb555099e0736a8cd051474a4b024844af96402e64317dfc1c5.scope: Deactivated successfully.
Jan 21 23:57:46 compute-0 podman[263301]: 2026-01-21 23:57:46.83785661 +0000 UTC m=+0.180341017 container died 8061298658a2bfb555099e0736a8cd051474a4b024844af96402e64317dfc1c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_greider, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:57:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-823bece143f6e984e36de35f5f8e0568514efb394f824ebf81cb3a888e14185a-merged.mount: Deactivated successfully.
Jan 21 23:57:46 compute-0 podman[263301]: 2026-01-21 23:57:46.883742971 +0000 UTC m=+0.226227378 container remove 8061298658a2bfb555099e0736a8cd051474a4b024844af96402e64317dfc1c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_greider, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 23:57:46 compute-0 systemd[1]: libpod-conmon-8061298658a2bfb555099e0736a8cd051474a4b024844af96402e64317dfc1c5.scope: Deactivated successfully.
Jan 21 23:57:47 compute-0 podman[263341]: 2026-01-21 23:57:47.122709209 +0000 UTC m=+0.079811405 container create cd99fe33dfe2e880afbac25d628c1ede31de6fd0a3d0fd6bd610a96431d7a7ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 23:57:47 compute-0 systemd[1]: Started libpod-conmon-cd99fe33dfe2e880afbac25d628c1ede31de6fd0a3d0fd6bd610a96431d7a7ed.scope.
Jan 21 23:57:47 compute-0 podman[263341]: 2026-01-21 23:57:47.092070416 +0000 UTC m=+0.049172672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:57:47 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4450e87abf0695cac576fa353da6de0b7aa01c401e063db2c43e62f16fa7089e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4450e87abf0695cac576fa353da6de0b7aa01c401e063db2c43e62f16fa7089e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4450e87abf0695cac576fa353da6de0b7aa01c401e063db2c43e62f16fa7089e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4450e87abf0695cac576fa353da6de0b7aa01c401e063db2c43e62f16fa7089e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:57:47 compute-0 podman[263341]: 2026-01-21 23:57:47.223992313 +0000 UTC m=+0.181094509 container init cd99fe33dfe2e880afbac25d628c1ede31de6fd0a3d0fd6bd610a96431d7a7ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 23:57:47 compute-0 podman[263341]: 2026-01-21 23:57:47.235192198 +0000 UTC m=+0.192294404 container start cd99fe33dfe2e880afbac25d628c1ede31de6fd0a3d0fd6bd610a96431d7a7ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 21 23:57:47 compute-0 podman[263341]: 2026-01-21 23:57:47.239413057 +0000 UTC m=+0.196515273 container attach cd99fe33dfe2e880afbac25d628c1ede31de6fd0a3d0fd6bd610a96431d7a7ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 21 23:57:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:47.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:57:47 compute-0 jovial_haibt[263358]: {
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:     "1": [
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:         {
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:             "devices": [
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:                 "/dev/loop3"
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:             ],
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:             "lv_name": "ceph_lv0",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:             "lv_size": "7511998464",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:             "name": "ceph_lv0",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:             "tags": {
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:                 "ceph.cluster_name": "ceph",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:                 "ceph.crush_device_class": "",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:                 "ceph.encrypted": "0",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:                 "ceph.osd_id": "1",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:                 "ceph.type": "block",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:                 "ceph.vdo": "0"
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:             },
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:             "type": "block",
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:             "vg_name": "ceph_vg0"
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:         }
Jan 21 23:57:47 compute-0 jovial_haibt[263358]:     ]
Jan 21 23:57:47 compute-0 jovial_haibt[263358]: }
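[annotation] The JSON block printed by jovial_haibt above is the output of the "ceph-volume ... lvm list --format json" call that cephadm dispatched via sudo at 23:57:46 (logged a few lines earlier). As a minimal illustrative sketch — the field names ("lv_path", "devices", the "tags" object) come straight from the payload above, but the helper itself is hypothetical, not cephadm's actual code — such output can be reduced to an OSD-id-to-device map like so:

    import json

    def osd_devices(lvm_list_json: str) -> dict:
        """Map OSD id -> (lv_path, backing devices) from `ceph-volume lvm list --format json`."""
        out = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                # "lv_tags" carries the same ceph.* keys as the "tags" object, comma-joined.
                out[int(osd_id)] = (lv["lv_path"], lv["devices"])
        return out

    # With the payload logged above, this yields:
    #   {1: ("/dev/ceph_vg0/ceph_lv0", ["/dev/loop3"])}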
Jan 21 23:57:48 compute-0 systemd[1]: libpod-cd99fe33dfe2e880afbac25d628c1ede31de6fd0a3d0fd6bd610a96431d7a7ed.scope: Deactivated successfully.
Jan 21 23:57:48 compute-0 podman[263341]: 2026-01-21 23:57:48.035088803 +0000 UTC m=+0.992191039 container died cd99fe33dfe2e880afbac25d628c1ede31de6fd0a3d0fd6bd610a96431d7a7ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 23:57:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-4450e87abf0695cac576fa353da6de0b7aa01c401e063db2c43e62f16fa7089e-merged.mount: Deactivated successfully.
Jan 21 23:57:48 compute-0 podman[263341]: 2026-01-21 23:57:48.110375318 +0000 UTC m=+1.067477514 container remove cd99fe33dfe2e880afbac25d628c1ede31de6fd0a3d0fd6bd610a96431d7a7ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 21 23:57:48 compute-0 systemd[1]: libpod-conmon-cd99fe33dfe2e880afbac25d628c1ede31de6fd0a3d0fd6bd610a96431d7a7ed.scope: Deactivated successfully.
Jan 21 23:57:48 compute-0 sudo[263237]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:48 compute-0 sudo[263380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:57:48 compute-0 sudo[263380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:48 compute-0 sudo[263380]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:48 compute-0 sudo[263405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:57:48 compute-0 sudo[263405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:48 compute-0 sudo[263405]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:48 compute-0 sudo[263430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:57:48 compute-0 sudo[263430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:48 compute-0 sudo[263430]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:48 compute-0 sudo[263455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:57:48 compute-0 sudo[263455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:57:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:48.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:57:48 compute-0 ceph-mon[74318]: pgmap v1182: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:57:48.755 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:57:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:57:48.758 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:57:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:57:48.759 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:57:48 compute-0 podman[263522]: 2026-01-21 23:57:48.923411039 +0000 UTC m=+0.072159810 container create 3775e557a4e4b3a8545acedba9b541ee8a826c94789b73ba89f30cedd0e574c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 21 23:57:48 compute-0 systemd[1]: Started libpod-conmon-3775e557a4e4b3a8545acedba9b541ee8a826c94789b73ba89f30cedd0e574c3.scope.
Jan 21 23:57:48 compute-0 podman[263522]: 2026-01-21 23:57:48.895243133 +0000 UTC m=+0.043991954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:57:49 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:57:49 compute-0 podman[263522]: 2026-01-21 23:57:49.02393809 +0000 UTC m=+0.172686881 container init 3775e557a4e4b3a8545acedba9b541ee8a826c94789b73ba89f30cedd0e574c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:57:49 compute-0 podman[263522]: 2026-01-21 23:57:49.034823685 +0000 UTC m=+0.183572466 container start 3775e557a4e4b3a8545acedba9b541ee8a826c94789b73ba89f30cedd0e574c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 23:57:49 compute-0 podman[263522]: 2026-01-21 23:57:49.038414145 +0000 UTC m=+0.187162916 container attach 3775e557a4e4b3a8545acedba9b541ee8a826c94789b73ba89f30cedd0e574c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:57:49 compute-0 exciting_mccarthy[263538]: 167 167
Jan 21 23:57:49 compute-0 systemd[1]: libpod-3775e557a4e4b3a8545acedba9b541ee8a826c94789b73ba89f30cedd0e574c3.scope: Deactivated successfully.
Jan 21 23:57:49 compute-0 podman[263522]: 2026-01-21 23:57:49.04408308 +0000 UTC m=+0.192831811 container died 3775e557a4e4b3a8545acedba9b541ee8a826c94789b73ba89f30cedd0e574c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 23:57:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-719c23c50c00ceeb831d5dda27c197e5fcffd0655af922efd1529bc7d32d70c6-merged.mount: Deactivated successfully.
Jan 21 23:57:49 compute-0 podman[263522]: 2026-01-21 23:57:49.084793741 +0000 UTC m=+0.233542512 container remove 3775e557a4e4b3a8545acedba9b541ee8a826c94789b73ba89f30cedd0e574c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:57:49 compute-0 systemd[1]: libpod-conmon-3775e557a4e4b3a8545acedba9b541ee8a826c94789b73ba89f30cedd0e574c3.scope: Deactivated successfully.
Jan 21 23:57:49 compute-0 podman[263562]: 2026-01-21 23:57:49.311145281 +0000 UTC m=+0.051628698 container create 92999d393dd3db29f7cd6da6ae6cfa9a7455a0cb2d21c19f03b0ca75438dfb83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 23:57:49 compute-0 systemd[1]: Started libpod-conmon-92999d393dd3db29f7cd6da6ae6cfa9a7455a0cb2d21c19f03b0ca75438dfb83.scope.
Jan 21 23:57:49 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffc5c05860d1147c3ca065e7bd22da5a33b66cd9785d6e3ac247e0c1d9d73138/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:57:49 compute-0 podman[263562]: 2026-01-21 23:57:49.29516229 +0000 UTC m=+0.035645717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffc5c05860d1147c3ca065e7bd22da5a33b66cd9785d6e3ac247e0c1d9d73138/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffc5c05860d1147c3ca065e7bd22da5a33b66cd9785d6e3ac247e0c1d9d73138/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffc5c05860d1147c3ca065e7bd22da5a33b66cd9785d6e3ac247e0c1d9d73138/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:57:49 compute-0 podman[263562]: 2026-01-21 23:57:49.409104653 +0000 UTC m=+0.149588090 container init 92999d393dd3db29f7cd6da6ae6cfa9a7455a0cb2d21c19f03b0ca75438dfb83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:57:49 compute-0 podman[263562]: 2026-01-21 23:57:49.41971221 +0000 UTC m=+0.160195647 container start 92999d393dd3db29f7cd6da6ae6cfa9a7455a0cb2d21c19f03b0ca75438dfb83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:57:49 compute-0 podman[263562]: 2026-01-21 23:57:49.424156596 +0000 UTC m=+0.164640053 container attach 92999d393dd3db29f7cd6da6ae6cfa9a7455a0cb2d21c19f03b0ca75438dfb83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 23:57:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:57:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:49.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:57:50 compute-0 sudo[263586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:57:50 compute-0 sudo[263586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:50 compute-0 sudo[263586]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:50 compute-0 sudo[263619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:57:50 compute-0 sudo[263619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:50 compute-0 sudo[263619]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:50 compute-0 practical_leakey[263578]: {
Jan 21 23:57:50 compute-0 practical_leakey[263578]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:57:50 compute-0 practical_leakey[263578]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:57:50 compute-0 practical_leakey[263578]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:57:50 compute-0 practical_leakey[263578]:         "osd_id": 1,
Jan 21 23:57:50 compute-0 practical_leakey[263578]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:57:50 compute-0 practical_leakey[263578]:         "type": "bluestore"
Jan 21 23:57:50 compute-0 practical_leakey[263578]:     }
Jan 21 23:57:50 compute-0 practical_leakey[263578]: }
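[annotation] The practical_leakey payload above comes from the companion "raw list --format json" call at 23:57:48; unlike the LVM listing it is keyed by OSD UUID rather than OSD id. A hedged sketch of cross-checking the two listings (field names taken from the logged JSON; the helper is illustrative only):

    import json

    def cross_check(lvm_json: str, raw_json: str) -> bool:
        """Verify every OSD in `raw list` also appears in `lvm list` with the same OSD fsid."""
        lvm = json.loads(lvm_json)   # {"1": [{"tags": {"ceph.osd_fsid": ...}, ...}]}
        raw = json.loads(raw_json)   # {"<osd_uuid>": {"osd_id": 1, "ceph_fsid": ...}, ...}
        lvm_fsids = {lv["tags"]["ceph.osd_fsid"] for lvs in lvm.values() for lv in lvs}
        return all(entry["osd_uuid"] in lvm_fsids for entry in raw.values())

    # Both listings above describe the same OSD 1 (osd_fsid 4f45f4f4-edfc-474c-93fc-45d596171ed8),
    # so this returns True for the logged payloads.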
Jan 21 23:57:50 compute-0 systemd[1]: libpod-92999d393dd3db29f7cd6da6ae6cfa9a7455a0cb2d21c19f03b0ca75438dfb83.scope: Deactivated successfully.
Jan 21 23:57:50 compute-0 podman[263562]: 2026-01-21 23:57:50.304790006 +0000 UTC m=+1.045273423 container died 92999d393dd3db29f7cd6da6ae6cfa9a7455a0cb2d21c19f03b0ca75438dfb83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_leakey, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 23:57:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffc5c05860d1147c3ca065e7bd22da5a33b66cd9785d6e3ac247e0c1d9d73138-merged.mount: Deactivated successfully.
Jan 21 23:57:50 compute-0 podman[263562]: 2026-01-21 23:57:50.3680328 +0000 UTC m=+1.108516217 container remove 92999d393dd3db29f7cd6da6ae6cfa9a7455a0cb2d21c19f03b0ca75438dfb83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 21 23:57:50 compute-0 systemd[1]: libpod-conmon-92999d393dd3db29f7cd6da6ae6cfa9a7455a0cb2d21c19f03b0ca75438dfb83.scope: Deactivated successfully.
Jan 21 23:57:50 compute-0 sudo[263455]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:57:50 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:57:50 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:57:50 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:57:50 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev cb5e1dfd-4d66-48bd-afe7-d0b81009dabf does not exist
Jan 21 23:57:50 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 754590dc-e9c6-486c-bab6-ee4b2c5ff225 does not exist
Jan 21 23:57:50 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 0d48c472-8009-458f-bda1-a63060b29a1f does not exist
Jan 21 23:57:50 compute-0 sudo[263664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:57:50 compute-0 sudo[263664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:50 compute-0 sudo[263664]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:50 compute-0 sudo[263689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:57:50 compute-0 sudo[263689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:57:50 compute-0 sudo[263689]: pam_unix(sudo:session): session closed for user root
Jan 21 23:57:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:57:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:50.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:57:51 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:57:51 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:57:51 compute-0 ceph-mon[74318]: pgmap v1183: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:51.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:52.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:52 compute-0 ceph-mon[74318]: pgmap v1184: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:57:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:57:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:53.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 21 23:57:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:57:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:54.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:57:54 compute-0 ceph-mon[74318]: pgmap v1185: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:57:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:55.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:57:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:57:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:56.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:57:56 compute-0 ceph-mon[74318]: pgmap v1186: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:57.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:57:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:57:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:57:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:57:58.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:57:58 compute-0 ceph-mon[74318]: pgmap v1187: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:57:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:57:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:57:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:57:59.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:00.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:00 compute-0 ceph-mon[74318]: pgmap v1188: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:01.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:02.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:02 compute-0 ceph-mon[74318]: pgmap v1189: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:58:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:03.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:04.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:04 compute-0 ceph-mon[74318]: pgmap v1190: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:05.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:06 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:58:06.356 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 23:58:06 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:58:06.359 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 23:58:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:06.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:06 compute-0 ceph-mon[74318]: pgmap v1191: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:07.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:58:08 compute-0 podman[263723]: 2026-01-21 23:58:08.017533888 +0000 UTC m=+0.124306934 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 21 23:58:08 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:58:08.361 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 23:58:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:08.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:08 compute-0 ceph-mon[74318]: pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:58:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:58:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:58:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:58:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:58:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:58:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:09.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:10 compute-0 sudo[263754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:58:10 compute-0 sudo[263754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:10 compute-0 sudo[263754]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:10 compute-0 sudo[263779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:58:10 compute-0 sudo[263779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:10 compute-0 sudo[263779]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:10.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:10 compute-0 ceph-mon[74318]: pgmap v1193: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:11.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:11 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2657720575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:58:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:12.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:12 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/814986475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:58:12 compute-0 ceph-mon[74318]: pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:58:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:13.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:13 compute-0 nova_compute[247516]: 2026-01-21 23:58:13.607 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:58:13 compute-0 nova_compute[247516]: 2026-01-21 23:58:13.608 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 23:58:13 compute-0 podman[263806]: 2026-01-21 23:58:13.985050214 +0000 UTC m=+0.091296108 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 21 23:58:14 compute-0 sshd-session[263751]: Connection reset by 198.235.24.207 port 65158 [preauth]
Jan 21 23:58:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:14 compute-0 ceph-mon[74318]: pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:14.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:15.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:15 compute-0 nova_compute[247516]: 2026-01-21 23:58:15.988 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:58:15 compute-0 nova_compute[247516]: 2026-01-21 23:58:15.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:58:15 compute-0 nova_compute[247516]: 2026-01-21 23:58:15.991 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 23:58:15 compute-0 nova_compute[247516]: 2026-01-21 23:58:15.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 23:58:16 compute-0 nova_compute[247516]: 2026-01-21 23:58:16.020 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 23:58:16 compute-0 nova_compute[247516]: 2026-01-21 23:58:16.020 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:58:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:16 compute-0 ceph-mon[74318]: pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:16.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:17.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:17 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1994733952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:58:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:58:17 compute-0 nova_compute[247516]: 2026-01-21 23:58:17.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:58:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:18.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:18 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1631678065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:58:18 compute-0 ceph-mon[74318]: pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:18 compute-0 nova_compute[247516]: 2026-01-21 23:58:18.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:58:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:58:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:19.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:58:19 compute-0 nova_compute[247516]: 2026-01-21 23:58:19.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:58:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:20 compute-0 ceph-mon[74318]: pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:20.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:20 compute-0 nova_compute[247516]: 2026-01-21 23:58:20.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:58:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:21.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:21 compute-0 nova_compute[247516]: 2026-01-21 23:58:21.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:58:22 compute-0 nova_compute[247516]: 2026-01-21 23:58:22.020 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:58:22 compute-0 nova_compute[247516]: 2026-01-21 23:58:22.020 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:58:22 compute-0 nova_compute[247516]: 2026-01-21 23:58:22.021 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:58:22 compute-0 nova_compute[247516]: 2026-01-21 23:58:22.021 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 23:58:22 compute-0 nova_compute[247516]: 2026-01-21 23:58:22.022 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:58:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:58:22 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1393886865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:58:22 compute-0 nova_compute[247516]: 2026-01-21 23:58:22.554 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:58:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1393886865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:58:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:22.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:22 compute-0 nova_compute[247516]: 2026-01-21 23:58:22.824 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 23:58:22 compute-0 nova_compute[247516]: 2026-01-21 23:58:22.825 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5188MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 23:58:22 compute-0 nova_compute[247516]: 2026-01-21 23:58:22.826 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:58:22 compute-0 nova_compute[247516]: 2026-01-21 23:58:22.826 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:58:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:58:22 compute-0 nova_compute[247516]: 2026-01-21 23:58:22.932 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 21 23:58:22 compute-0 nova_compute[247516]: 2026-01-21 23:58:22.933 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 23:58:22 compute-0 nova_compute[247516]: 2026-01-21 23:58:22.934 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 23:58:22 compute-0 nova_compute[247516]: 2026-01-21 23:58:22.984 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:58:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:58:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4250578073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:58:23 compute-0 nova_compute[247516]: 2026-01-21 23:58:23.486 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:58:23 compute-0 nova_compute[247516]: 2026-01-21 23:58:23.496 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 23:58:23 compute-0 nova_compute[247516]: 2026-01-21 23:58:23.528 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 23:58:23 compute-0 nova_compute[247516]: 2026-01-21 23:58:23.530 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 23:58:23 compute-0 nova_compute[247516]: 2026-01-21 23:58:23.530 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:58:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:58:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:23.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:58:23 compute-0 ceph-mon[74318]: pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/4250578073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:58:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:24 compute-0 ceph-mon[74318]: pgmap v1200: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:24.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:25.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 21 23:58:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2119440164' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:58:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 21 23:58:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2119440164' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:58:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2119440164' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:58:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2119440164' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:58:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:26.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:26 compute-0 ceph-mon[74318]: pgmap v1201: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:27.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:58:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:28.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:28 compute-0 ceph-mon[74318]: pgmap v1202: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:58:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:29.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:30 compute-0 sudo[263878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:58:30 compute-0 sudo[263878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:30 compute-0 sudo[263878]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:30 compute-0 sudo[263903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:58:30 compute-0 sudo[263903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:30 compute-0 sudo[263903]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 21 23:58:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:30.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:30 compute-0 ceph-mon[74318]: pgmap v1203: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 21 23:58:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:31.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 21 23:58:32 compute-0 ceph-mon[74318]: pgmap v1204: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 21 23:58:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:32.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:58:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:33.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 21 23:58:34 compute-0 ceph-mon[74318]: pgmap v1205: 305 pgs: 305 active+clean; 41 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 21 23:58:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:34.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:35.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 58 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 526 KiB/s wr, 13 op/s
Jan 21 23:58:36 compute-0 ceph-mon[74318]: pgmap v1206: 305 pgs: 305 active+clean; 58 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 526 KiB/s wr, 13 op/s
Jan 21 23:58:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:36.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:37.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:58:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 58 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 526 KiB/s wr, 13 op/s
Jan 21 23:58:38 compute-0 ceph-mon[74318]: pgmap v1207: 305 pgs: 305 active+clean; 58 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 526 KiB/s wr, 13 op/s
Jan 21 23:58:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:38.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:38 compute-0 podman[263932]: 2026-01-21 23:58:38.977325807 +0000 UTC m=+0.094293852 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-21_23:58:39
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', '.mgr', 'volumes', 'vms']
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:58:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:58:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:39.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 88 MiB data, 257 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 21 23:58:40 compute-0 ceph-mon[74318]: pgmap v1208: 305 pgs: 305 active+clean; 88 MiB data, 257 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 21 23:58:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:40.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:41.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 21 23:58:42 compute-0 ceph-mon[74318]: pgmap v1209: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 21 23:58:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:42.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:58:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:43.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 21 23:58:44 compute-0 ceph-mon[74318]: pgmap v1210: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 21 23:58:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:44.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:44 compute-0 podman[263964]: 2026-01-21 23:58:44.981733096 +0000 UTC m=+0.089860244 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 23:58:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 21 23:58:45 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1512621914' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:58:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 21 23:58:45 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1512621914' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:58:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:45.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:45 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1512621914' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:58:45 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1512621914' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
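[editor's note] The audit entries above show entity client.openstack polling pool capacity with the "df" and "osd pool get-quota" mon commands. The same two queries can be reproduced from the CLI; a sketch assuming a local ceph client with a keyring permitted to run these read-only calls:

    import json
    import subprocess

    # Same mon commands as the client.openstack audit entries above.
    df = json.loads(subprocess.check_output(
        ["ceph", "df", "--format", "json"]))
    quota = json.loads(subprocess.check_output(
        ["ceph", "osd", "pool", "get-quota", "volumes", "--format", "json"]))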
Jan 21 23:58:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 71 MiB data, 275 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 21 23:58:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:46.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:46 compute-0 ceph-mon[74318]: pgmap v1211: 305 pgs: 305 active+clean; 71 MiB data, 275 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 21 23:58:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:47.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:58:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 71 MiB data, 275 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.3 MiB/s wr, 31 op/s
Jan 21 23:58:48 compute-0 ceph-mon[74318]: pgmap v1212: 305 pgs: 305 active+clean; 71 MiB data, 275 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.3 MiB/s wr, 31 op/s
Jan 21 23:58:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:48.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:58:48.756 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:58:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:58:48.757 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:58:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:58:48.757 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
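[editor's note] The three ovn_metadata_agent lines above are the acquire/acquired/released trace that oslo.concurrency emits around a synchronized section (the "inner" frames come from lockutils.py). A sketch of the pattern that produces such a trio; the function name mirrors neutron's ProcessMonitor._check_child_processes but the body here is a placeholder:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Runs with the named lock held; lockutils logs the
        # "Acquiring lock" / "acquired" / "released" lines seen above.
        pass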
Jan 21 23:58:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:49.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 1.3 MiB/s wr, 43 op/s
Jan 21 23:58:50 compute-0 ceph-mon[74318]: pgmap v1213: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 1.3 MiB/s wr, 43 op/s
Jan 21 23:58:50 compute-0 sudo[263987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:58:50 compute-0 sudo[263987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:50 compute-0 sudo[263987]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:50.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:50 compute-0 sudo[264012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:58:50 compute-0 sudo[264012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:50 compute-0 sudo[264012]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:51 compute-0 sudo[264037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:58:51 compute-0 sudo[264037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:51 compute-0 sudo[264037]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:51 compute-0 sudo[264062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:58:51 compute-0 sudo[264062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:51 compute-0 sudo[264062]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:51 compute-0 sudo[264087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:58:51 compute-0 sudo[264087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:51 compute-0 sudo[264087]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:51 compute-0 sudo[264112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:58:51 compute-0 sudo[264112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:51.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:51 compute-0 sudo[264112]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:58:51 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:58:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 21 23:58:51 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:58:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 21 23:58:51 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:58:51 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 11bca373-cfe7-40af-8eaf-fbb875ec6f38 does not exist
Jan 21 23:58:51 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 868c4328-327b-49f2-98f0-77ff752b843e does not exist
Jan 21 23:58:51 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 57fd9547-2895-4fce-940c-9f3bf69e2f4b does not exist
Jan 21 23:58:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 21 23:58:51 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:58:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 21 23:58:51 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:58:51 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 21 23:58:51 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:58:52 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:58:52 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 21 23:58:52 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:58:52 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 21 23:58:52 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 21 23:58:52 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 21 23:58:52 compute-0 sudo[264170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:58:52 compute-0 sudo[264170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:52 compute-0 sudo[264170]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:52 compute-0 sudo[264195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:58:52 compute-0 sudo[264195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:52 compute-0 sudo[264195]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:52 compute-0 sudo[264220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:58:52 compute-0 sudo[264220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:52 compute-0 sudo[264220]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:52 compute-0 sudo[264245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 21 23:58:52 compute-0 sudo[264245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 21 23:58:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:52.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:52 compute-0 podman[264310]: 2026-01-21 23:58:52.78267246 +0000 UTC m=+0.074200224 container create 408c26ecddb04364866668e9a4e38c331ea88f722cb930783fd36828a15a266c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 21 23:58:52 compute-0 systemd[1]: Started libpod-conmon-408c26ecddb04364866668e9a4e38c331ea88f722cb930783fd36828a15a266c.scope.
Jan 21 23:58:52 compute-0 podman[264310]: 2026-01-21 23:58:52.748487678 +0000 UTC m=+0.040015502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:58:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:58:52 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:58:52 compute-0 podman[264310]: 2026-01-21 23:58:52.882110297 +0000 UTC m=+0.173638061 container init 408c26ecddb04364866668e9a4e38c331ea88f722cb930783fd36828a15a266c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 21 23:58:52 compute-0 podman[264310]: 2026-01-21 23:58:52.89391431 +0000 UTC m=+0.185442074 container start 408c26ecddb04364866668e9a4e38c331ea88f722cb930783fd36828a15a266c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 23:58:52 compute-0 podman[264310]: 2026-01-21 23:58:52.898278794 +0000 UTC m=+0.189806528 container attach 408c26ecddb04364866668e9a4e38c331ea88f722cb930783fd36828a15a266c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 23:58:52 compute-0 musing_pasteur[264326]: 167 167
Jan 21 23:58:52 compute-0 systemd[1]: libpod-408c26ecddb04364866668e9a4e38c331ea88f722cb930783fd36828a15a266c.scope: Deactivated successfully.
Jan 21 23:58:52 compute-0 podman[264310]: 2026-01-21 23:58:52.899622395 +0000 UTC m=+0.191150159 container died 408c26ecddb04364866668e9a4e38c331ea88f722cb930783fd36828a15a266c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:58:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b4d917ac2c4a70f062f3755126d252e917782e8b99d7006802f8e32f346a330-merged.mount: Deactivated successfully.
Jan 21 23:58:52 compute-0 podman[264310]: 2026-01-21 23:58:52.954780881 +0000 UTC m=+0.246308605 container remove 408c26ecddb04364866668e9a4e38c331ea88f722cb930783fd36828a15a266c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:58:52 compute-0 systemd[1]: libpod-conmon-408c26ecddb04364866668e9a4e38c331ea88f722cb930783fd36828a15a266c.scope: Deactivated successfully.
Jan 21 23:58:53 compute-0 ceph-mon[74318]: pgmap v1214: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 21 23:58:53 compute-0 podman[264350]: 2026-01-21 23:58:53.204313334 +0000 UTC m=+0.067322081 container create ed5cb99a3c159aeb8226b73edef84d8edbbba2a09e4954a08c816dde5872d110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 23:58:53 compute-0 systemd[1]: Started libpod-conmon-ed5cb99a3c159aeb8226b73edef84d8edbbba2a09e4954a08c816dde5872d110.scope.
Jan 21 23:58:53 compute-0 podman[264350]: 2026-01-21 23:58:53.179979626 +0000 UTC m=+0.042988373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:58:53 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1af95d537826ec84c85f9d298fb5f0792c120ce6f701c9de9b39e15a3a3f86af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1af95d537826ec84c85f9d298fb5f0792c120ce6f701c9de9b39e15a3a3f86af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1af95d537826ec84c85f9d298fb5f0792c120ce6f701c9de9b39e15a3a3f86af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1af95d537826ec84c85f9d298fb5f0792c120ce6f701c9de9b39e15a3a3f86af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1af95d537826ec84c85f9d298fb5f0792c120ce6f701c9de9b39e15a3a3f86af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 23:58:53 compute-0 podman[264350]: 2026-01-21 23:58:53.316209925 +0000 UTC m=+0.179218692 container init ed5cb99a3c159aeb8226b73edef84d8edbbba2a09e4954a08c816dde5872d110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:58:53 compute-0 podman[264350]: 2026-01-21 23:58:53.338587753 +0000 UTC m=+0.201596460 container start ed5cb99a3c159aeb8226b73edef84d8edbbba2a09e4954a08c816dde5872d110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_tesla, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 23:58:53 compute-0 podman[264350]: 2026-01-21 23:58:53.341945587 +0000 UTC m=+0.204954324 container attach ed5cb99a3c159aeb8226b73edef84d8edbbba2a09e4954a08c816dde5872d110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_tesla, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 23:58:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:58:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:53.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:58:54 compute-0 practical_tesla[264366]: --> passed data devices: 0 physical, 1 LVM
Jan 21 23:58:54 compute-0 practical_tesla[264366]: --> relative data size: 1.0
Jan 21 23:58:54 compute-0 practical_tesla[264366]: --> All data devices are unavailable
Jan 21 23:58:54 compute-0 systemd[1]: libpod-ed5cb99a3c159aeb8226b73edef84d8edbbba2a09e4954a08c816dde5872d110.scope: Deactivated successfully.
Jan 21 23:58:54 compute-0 podman[264350]: 2026-01-21 23:58:54.223765712 +0000 UTC m=+1.086774449 container died ed5cb99a3c159aeb8226b73edef84d8edbbba2a09e4954a08c816dde5872d110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_tesla, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 23:58:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-1af95d537826ec84c85f9d298fb5f0792c120ce6f701c9de9b39e15a3a3f86af-merged.mount: Deactivated successfully.
Jan 21 23:58:54 compute-0 podman[264350]: 2026-01-21 23:58:54.297161139 +0000 UTC m=+1.160169886 container remove ed5cb99a3c159aeb8226b73edef84d8edbbba2a09e4954a08c816dde5872d110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_tesla, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 23:58:54 compute-0 systemd[1]: libpod-conmon-ed5cb99a3c159aeb8226b73edef84d8edbbba2a09e4954a08c816dde5872d110.scope: Deactivated successfully.
Jan 21 23:58:54 compute-0 sudo[264245]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
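[editor's note] The pg_autoscaler lines above follow a fixed arithmetic: pg target = capacity ratio x pool bias x a cluster PG budget, after which the value is quantized to a power of two and usually left at the current pg_num. The logged numbers imply a budget of 300, consistent with 3 OSDs at the default mon_target_pg_per_osd of 100; both figures are inferred here, not read from the cluster. A worked sketch checking two of the lines above:

    # Reproduce the "pg target" values the pg_autoscaler logs above.
    # Assumption (inferred from the 300x factor the numbers imply):
    # 3 OSDs with mon_target_pg_per_osd = 100, i.e. a 300 PG budget.
    PG_BUDGET = 3 * 100

    def raw_pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * PG_BUDGET

    # '.mgr': using 2.0538165363856318e-05 of space, bias 1.0
    assert abs(raw_pg_target(2.0538165363856318e-05, 1.0)
               - 0.006161449609156895) < 1e-9
    # 'cephfs.cephfs.meta': using 1.4540294062907128e-06, bias 4.0
    assert abs(raw_pg_target(1.4540294062907128e-06, 4.0)
               - 0.0017448352875488555) < 1e-9

    # The autoscaler then rounds to a power of two and, as the
    # "(current N)" suffix shows, leaves pg_num alone for small deltas.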
Jan 21 23:58:54 compute-0 sudo[264394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:58:54 compute-0 sudo[264394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:54 compute-0 sudo[264394]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:54 compute-0 sudo[264419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:58:54 compute-0 sudo[264419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:54 compute-0 sudo[264419]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:54 compute-0 sudo[264444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:58:54 compute-0 sudo[264444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:54 compute-0 sudo[264444]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 21 23:58:54 compute-0 sudo[264469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 21 23:58:54 compute-0 sudo[264469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:54 compute-0 ceph-mon[74318]: pgmap v1215: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 21 23:58:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:54.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:55 compute-0 podman[264534]: 2026-01-21 23:58:55.154383047 +0000 UTC m=+0.068583589 container create f72c163d04aba45ff7fbdb9afc793d9753fee0073150ee10ab8090295f99ad32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hypatia, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 23:58:55 compute-0 systemd[1]: Started libpod-conmon-f72c163d04aba45ff7fbdb9afc793d9753fee0073150ee10ab8090295f99ad32.scope.
Jan 21 23:58:55 compute-0 podman[264534]: 2026-01-21 23:58:55.126065437 +0000 UTC m=+0.040265999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:58:55 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:58:55 compute-0 podman[264534]: 2026-01-21 23:58:55.261225322 +0000 UTC m=+0.175425884 container init f72c163d04aba45ff7fbdb9afc793d9753fee0073150ee10ab8090295f99ad32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hypatia, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 21 23:58:55 compute-0 podman[264534]: 2026-01-21 23:58:55.27151886 +0000 UTC m=+0.185719392 container start f72c163d04aba45ff7fbdb9afc793d9753fee0073150ee10ab8090295f99ad32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:58:55 compute-0 podman[264534]: 2026-01-21 23:58:55.275622335 +0000 UTC m=+0.189822897 container attach f72c163d04aba45ff7fbdb9afc793d9753fee0073150ee10ab8090295f99ad32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hypatia, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Jan 21 23:58:55 compute-0 romantic_hypatia[264550]: 167 167
Jan 21 23:58:55 compute-0 systemd[1]: libpod-f72c163d04aba45ff7fbdb9afc793d9753fee0073150ee10ab8090295f99ad32.scope: Deactivated successfully.
Jan 21 23:58:55 compute-0 podman[264534]: 2026-01-21 23:58:55.280517105 +0000 UTC m=+0.194717668 container died f72c163d04aba45ff7fbdb9afc793d9753fee0073150ee10ab8090295f99ad32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hypatia, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:58:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d2e98d13f37c6c7f1f0ea01ee8e0169c8ab5812888e8af2a5249a8115519a0a-merged.mount: Deactivated successfully.
Jan 21 23:58:55 compute-0 podman[264534]: 2026-01-21 23:58:55.32587148 +0000 UTC m=+0.240072002 container remove f72c163d04aba45ff7fbdb9afc793d9753fee0073150ee10ab8090295f99ad32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hypatia, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:58:55 compute-0 systemd[1]: libpod-conmon-f72c163d04aba45ff7fbdb9afc793d9753fee0073150ee10ab8090295f99ad32.scope: Deactivated successfully.
Jan 21 23:58:55 compute-0 podman[264574]: 2026-01-21 23:58:55.522111635 +0000 UTC m=+0.040564028 container create 8c0026d1cc62b322ad32ac8140f96d196b2feb560eed8108a06a72a9e84c5fc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 23:58:55 compute-0 systemd[1]: Started libpod-conmon-8c0026d1cc62b322ad32ac8140f96d196b2feb560eed8108a06a72a9e84c5fc5.scope.
Jan 21 23:58:55 compute-0 podman[264574]: 2026-01-21 23:58:55.501661966 +0000 UTC m=+0.020114379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:58:55 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2243de336efc8e2a70ce3236c789561698d0cb9930f1a2a6b81e2cdf6e3af3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2243de336efc8e2a70ce3236c789561698d0cb9930f1a2a6b81e2cdf6e3af3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2243de336efc8e2a70ce3236c789561698d0cb9930f1a2a6b81e2cdf6e3af3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2243de336efc8e2a70ce3236c789561698d0cb9930f1a2a6b81e2cdf6e3af3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:58:55 compute-0 podman[264574]: 2026-01-21 23:58:55.635475271 +0000 UTC m=+0.153927734 container init 8c0026d1cc62b322ad32ac8140f96d196b2feb560eed8108a06a72a9e84c5fc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 21 23:58:55 compute-0 podman[264574]: 2026-01-21 23:58:55.642859938 +0000 UTC m=+0.161312381 container start 8c0026d1cc62b322ad32ac8140f96d196b2feb560eed8108a06a72a9e84c5fc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pascal, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 23:58:55 compute-0 podman[264574]: 2026-01-21 23:58:55.647128828 +0000 UTC m=+0.165581322 container attach 8c0026d1cc62b322ad32ac8140f96d196b2feb560eed8108a06a72a9e84c5fc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pascal, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 23:58:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:55.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]: {
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:     "1": [
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:         {
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:             "devices": [
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:                 "/dev/loop3"
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:             ],
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:             "lv_name": "ceph_lv0",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:             "lv_size": "7511998464",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:             "name": "ceph_lv0",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:             "tags": {
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:                 "ceph.cluster_name": "ceph",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:                 "ceph.crush_device_class": "",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:                 "ceph.encrypted": "0",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:                 "ceph.osd_id": "1",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:                 "ceph.type": "block",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:                 "ceph.vdo": "0"
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:             },
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:             "type": "block",
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:             "vg_name": "ceph_vg0"
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:         }
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]:     ]
Jan 21 23:58:56 compute-0 sleepy_pascal[264591]: }
Jan 21 23:58:56 compute-0 systemd[1]: libpod-8c0026d1cc62b322ad32ac8140f96d196b2feb560eed8108a06a72a9e84c5fc5.scope: Deactivated successfully.
Jan 21 23:58:56 compute-0 podman[264574]: 2026-01-21 23:58:56.481460594 +0000 UTC m=+0.999913047 container died 8c0026d1cc62b322ad32ac8140f96d196b2feb560eed8108a06a72a9e84c5fc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pascal, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:58:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-da2243de336efc8e2a70ce3236c789561698d0cb9930f1a2a6b81e2cdf6e3af3-merged.mount: Deactivated successfully.
Jan 21 23:58:56 compute-0 podman[264574]: 2026-01-21 23:58:56.569812511 +0000 UTC m=+1.088264934 container remove 8c0026d1cc62b322ad32ac8140f96d196b2feb560eed8108a06a72a9e84c5fc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pascal, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 21 23:58:56 compute-0 systemd[1]: libpod-conmon-8c0026d1cc62b322ad32ac8140f96d196b2feb560eed8108a06a72a9e84c5fc5.scope: Deactivated successfully.
Jan 21 23:58:56 compute-0 sudo[264469]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 15 op/s
Jan 21 23:58:56 compute-0 ceph-mon[74318]: pgmap v1216: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 15 op/s
Jan 21 23:58:56 compute-0 sudo[264612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:58:56 compute-0 sudo[264612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:56 compute-0 sudo[264612]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:56.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:56 compute-0 sudo[264637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:58:56 compute-0 sudo[264637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:56 compute-0 sudo[264637]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:56 compute-0 sudo[264662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:58:56 compute-0 sudo[264662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:56 compute-0 sudo[264662]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:56 compute-0 sudo[264687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 21 23:58:56 compute-0 sudo[264687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:57 compute-0 podman[264752]: 2026-01-21 23:58:57.423266424 +0000 UTC m=+0.062502133 container create 96a008130673ccfafa589af54411d460e0a2d81eb97b41ab7b74e58cba7899f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 23:58:57 compute-0 systemd[1]: Started libpod-conmon-96a008130673ccfafa589af54411d460e0a2d81eb97b41ab7b74e58cba7899f7.scope.
Jan 21 23:58:57 compute-0 podman[264752]: 2026-01-21 23:58:57.398376959 +0000 UTC m=+0.037612728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:58:57 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:58:57 compute-0 podman[264752]: 2026-01-21 23:58:57.532273906 +0000 UTC m=+0.171509605 container init 96a008130673ccfafa589af54411d460e0a2d81eb97b41ab7b74e58cba7899f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pare, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 21 23:58:57 compute-0 podman[264752]: 2026-01-21 23:58:57.542682205 +0000 UTC m=+0.181917884 container start 96a008130673ccfafa589af54411d460e0a2d81eb97b41ab7b74e58cba7899f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 21 23:58:57 compute-0 podman[264752]: 2026-01-21 23:58:57.545653357 +0000 UTC m=+0.184889066 container attach 96a008130673ccfafa589af54411d460e0a2d81eb97b41ab7b74e58cba7899f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pare, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 23:58:57 compute-0 cool_pare[264770]: 167 167
Jan 21 23:58:57 compute-0 systemd[1]: libpod-96a008130673ccfafa589af54411d460e0a2d81eb97b41ab7b74e58cba7899f7.scope: Deactivated successfully.
Jan 21 23:58:57 compute-0 podman[264752]: 2026-01-21 23:58:57.55063314 +0000 UTC m=+0.189868859 container died 96a008130673ccfafa589af54411d460e0a2d81eb97b41ab7b74e58cba7899f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 21 23:58:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb05728e92c2beca06b91e66be1363c4bb2f386e515a23f86093d2f44e23db50-merged.mount: Deactivated successfully.
Jan 21 23:58:57 compute-0 podman[264752]: 2026-01-21 23:58:57.614807624 +0000 UTC m=+0.254043343 container remove 96a008130673ccfafa589af54411d460e0a2d81eb97b41ab7b74e58cba7899f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pare, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Jan 21 23:58:57 compute-0 systemd[1]: libpod-conmon-96a008130673ccfafa589af54411d460e0a2d81eb97b41ab7b74e58cba7899f7.scope: Deactivated successfully.
Jan 21 23:58:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:57.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:58:57 compute-0 podman[264795]: 2026-01-21 23:58:57.837584974 +0000 UTC m=+0.053755134 container create 49eaa1de7481612f7e9220b0cf3c7d7e1eaa8259c5cce19b8cd80200d8dfd71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mestorf, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 23:58:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:58:57 compute-0 systemd[1]: Started libpod-conmon-49eaa1de7481612f7e9220b0cf3c7d7e1eaa8259c5cce19b8cd80200d8dfd71e.scope.
Jan 21 23:58:57 compute-0 podman[264795]: 2026-01-21 23:58:57.81827815 +0000 UTC m=+0.034448330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 21 23:58:57 compute-0 systemd[1]: Started libcrun container.
Jan 21 23:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baac6022b9152c5e4e4137ef6a2d5f8b9fa01cec870e35f98859446d843c9472/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 23:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baac6022b9152c5e4e4137ef6a2d5f8b9fa01cec870e35f98859446d843c9472/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 23:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baac6022b9152c5e4e4137ef6a2d5f8b9fa01cec870e35f98859446d843c9472/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 23:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baac6022b9152c5e4e4137ef6a2d5f8b9fa01cec870e35f98859446d843c9472/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 23:58:57 compute-0 podman[264795]: 2026-01-21 23:58:57.95487092 +0000 UTC m=+0.171041170 container init 49eaa1de7481612f7e9220b0cf3c7d7e1eaa8259c5cce19b8cd80200d8dfd71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 23:58:57 compute-0 podman[264795]: 2026-01-21 23:58:57.971651926 +0000 UTC m=+0.187822116 container start 49eaa1de7481612f7e9220b0cf3c7d7e1eaa8259c5cce19b8cd80200d8dfd71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 21 23:58:57 compute-0 podman[264795]: 2026-01-21 23:58:57.976518485 +0000 UTC m=+0.192688685 container attach 49eaa1de7481612f7e9220b0cf3c7d7e1eaa8259c5cce19b8cd80200d8dfd71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mestorf, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 23:58:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 8.7 KiB/s rd, 596 B/s wr, 12 op/s
Jan 21 23:58:58 compute-0 ceph-mon[74318]: pgmap v1217: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 8.7 KiB/s rd, 596 B/s wr, 12 op/s
Jan 21 23:58:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:58:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:58:58.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:58:58 compute-0 youthful_mestorf[264811]: {
Jan 21 23:58:58 compute-0 youthful_mestorf[264811]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 21 23:58:58 compute-0 youthful_mestorf[264811]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 21 23:58:58 compute-0 youthful_mestorf[264811]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 21 23:58:58 compute-0 youthful_mestorf[264811]:         "osd_id": 1,
Jan 21 23:58:58 compute-0 youthful_mestorf[264811]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 21 23:58:58 compute-0 youthful_mestorf[264811]:         "type": "bluestore"
Jan 21 23:58:58 compute-0 youthful_mestorf[264811]:     }
Jan 21 23:58:58 compute-0 youthful_mestorf[264811]: }
Jan 21 23:58:58 compute-0 systemd[1]: libpod-49eaa1de7481612f7e9220b0cf3c7d7e1eaa8259c5cce19b8cd80200d8dfd71e.scope: Deactivated successfully.
Jan 21 23:58:58 compute-0 podman[264795]: 2026-01-21 23:58:58.925108035 +0000 UTC m=+1.141278235 container died 49eaa1de7481612f7e9220b0cf3c7d7e1eaa8259c5cce19b8cd80200d8dfd71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 23:58:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-baac6022b9152c5e4e4137ef6a2d5f8b9fa01cec870e35f98859446d843c9472-merged.mount: Deactivated successfully.
Jan 21 23:58:58 compute-0 podman[264795]: 2026-01-21 23:58:58.988935047 +0000 UTC m=+1.205105197 container remove 49eaa1de7481612f7e9220b0cf3c7d7e1eaa8259c5cce19b8cd80200d8dfd71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 21 23:58:59 compute-0 systemd[1]: libpod-conmon-49eaa1de7481612f7e9220b0cf3c7d7e1eaa8259c5cce19b8cd80200d8dfd71e.scope: Deactivated successfully.
Jan 21 23:58:59 compute-0 sudo[264687]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 21 23:58:59 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:58:59 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 21 23:58:59 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:58:59 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 903a946a-a38c-45a0-ae5f-6c4c9a870992 does not exist
Jan 21 23:58:59 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev e059e4e0-6869-44da-9f90-13e8cb218022 does not exist
Jan 21 23:58:59 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 36db1d89-c06a-4af6-a151-29323e69f5f6 does not exist
Jan 21 23:58:59 compute-0 sudo[264843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:58:59 compute-0 sudo[264843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:59 compute-0 sudo[264843]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:59 compute-0 sudo[264868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 23:58:59 compute-0 sudo[264868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:58:59 compute-0 sudo[264868]: pam_unix(sudo:session): session closed for user root
Jan 21 23:58:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:58:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:58:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:58:59.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:00 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:59:00 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 21 23:59:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 8.7 KiB/s rd, 596 B/s wr, 12 op/s
Jan 21 23:59:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:00.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:01 compute-0 ceph-mon[74318]: pgmap v1218: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 8.7 KiB/s rd, 596 B/s wr, 12 op/s
Jan 21 23:59:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:01.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:02 compute-0 ceph-mon[74318]: pgmap v1219: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:02.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:59:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:03.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:04 compute-0 ceph-mon[74318]: pgmap v1220: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:04.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:05.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:06 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:59:06.458 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 23:59:06 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:59:06.461 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 23:59:06 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:59:06.463 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 23:59:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:06 compute-0 ceph-mon[74318]: pgmap v1221: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:06.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:07.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:59:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:08 compute-0 ceph-mon[74318]: pgmap v1222: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:08.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:59:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:59:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:59:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:59:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:59:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:59:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:09.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:10 compute-0 podman[264899]: 2026-01-21 23:59:10.080441922 +0000 UTC m=+0.182649707 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 23:59:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:10 compute-0 ceph-mon[74318]: pgmap v1223: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:10.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:10 compute-0 sudo[264925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:59:10 compute-0 sudo[264925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:59:10 compute-0 sudo[264925]: pam_unix(sudo:session): session closed for user root
Jan 21 23:59:10 compute-0 sudo[264950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:59:10 compute-0 sudo[264950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:59:10 compute-0 sudo[264950]: pam_unix(sudo:session): session closed for user root
Jan 21 23:59:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:11.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:12 compute-0 ceph-mon[74318]: pgmap v1224: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:12.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:59:13 compute-0 nova_compute[247516]: 2026-01-21 23:59:13.532 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:59:13 compute-0 nova_compute[247516]: 2026-01-21 23:59:13.533 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 23:59:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:13.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:13 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/419746173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:59:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:14 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1117289711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:59:14 compute-0 ceph-mon[74318]: pgmap v1225: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:14.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:15.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:15 compute-0 podman[264978]: 2026-01-21 23:59:15.978832993 +0000 UTC m=+0.082078915 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 21 23:59:15 compute-0 nova_compute[247516]: 2026-01-21 23:59:15.988 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:59:15 compute-0 nova_compute[247516]: 2026-01-21 23:59:15.990 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:59:15 compute-0 nova_compute[247516]: 2026-01-21 23:59:15.990 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 23:59:15 compute-0 nova_compute[247516]: 2026-01-21 23:59:15.990 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 23:59:16 compute-0 nova_compute[247516]: 2026-01-21 23:59:16.003 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 23:59:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:16 compute-0 ceph-mon[74318]: pgmap v1226: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:16.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:17 compute-0 nova_compute[247516]: 2026-01-21 23:59:17.000 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:59:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:17.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:17 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1074855222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:59:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:59:17 compute-0 nova_compute[247516]: 2026-01-21 23:59:17.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:59:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:18 compute-0 ceph-mon[74318]: pgmap v1227: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:18.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:18 compute-0 nova_compute[247516]: 2026-01-21 23:59:18.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:59:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:19.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:19 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/4213679213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:59:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:20 compute-0 ceph-mon[74318]: pgmap v1228: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:20.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:20 compute-0 nova_compute[247516]: 2026-01-21 23:59:20.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:59:20 compute-0 nova_compute[247516]: 2026-01-21 23:59:20.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:59:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:21.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:22 compute-0 ceph-mon[74318]: pgmap v1229: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:22.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:59:22 compute-0 nova_compute[247516]: 2026-01-21 23:59:22.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:59:22 compute-0 nova_compute[247516]: 2026-01-21 23:59:22.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 23:59:23 compute-0 nova_compute[247516]: 2026-01-21 23:59:23.018 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:59:23 compute-0 nova_compute[247516]: 2026-01-21 23:59:23.019 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:59:23 compute-0 nova_compute[247516]: 2026-01-21 23:59:23.020 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:59:23 compute-0 nova_compute[247516]: 2026-01-21 23:59:23.020 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 23:59:23 compute-0 nova_compute[247516]: 2026-01-21 23:59:23.021 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:59:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:59:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3203811310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:59:23 compute-0 nova_compute[247516]: 2026-01-21 23:59:23.505 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:59:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:23.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3203811310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:59:23 compute-0 nova_compute[247516]: 2026-01-21 23:59:23.750 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 23:59:23 compute-0 nova_compute[247516]: 2026-01-21 23:59:23.753 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5165MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 23:59:23 compute-0 nova_compute[247516]: 2026-01-21 23:59:23.753 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:59:23 compute-0 nova_compute[247516]: 2026-01-21 23:59:23.754 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:59:23 compute-0 nova_compute[247516]: 2026-01-21 23:59:23.857 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 21 23:59:23 compute-0 nova_compute[247516]: 2026-01-21 23:59:23.858 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 23:59:23 compute-0 nova_compute[247516]: 2026-01-21 23:59:23.858 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 23:59:23 compute-0 nova_compute[247516]: 2026-01-21 23:59:23.900 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 23:59:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 21 23:59:24 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4177160710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:59:24 compute-0 nova_compute[247516]: 2026-01-21 23:59:24.309 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 23:59:24 compute-0 nova_compute[247516]: 2026-01-21 23:59:24.316 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 23:59:24 compute-0 nova_compute[247516]: 2026-01-21 23:59:24.347 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
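Placement treats each inventory row above as (total - reserved) * allocation_ratio of schedulable capacity, so the logged inventory works out as follows (a worked check of the numbers in the line above, nothing more):

    # (total - reserved) * allocation_ratio, per resource class in the log line:
    vcpus   = (8 - 0) * 4.0        # 32.0 schedulable VCPUs
    ram_mb  = (7679 - 512) * 1.0   # 7167.0 MB schedulable RAM
    disk_gb = (20 - 0) * 0.9       # 18.0 GB schedulable disk
    print(vcpus, ram_mb, disk_gb)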
Jan 21 23:59:24 compute-0 nova_compute[247516]: 2026-01-21 23:59:24.349 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 23:59:24 compute-0 nova_compute[247516]: 2026-01-21 23:59:24.349 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
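The acquiring / "held 0.595s" / released trio bracketing this update comes from oslo.concurrency's lock decorator. A minimal sketch of the same pattern, assuming only that oslo.concurrency is installed (the lock name matches the log; the function is illustrative, not Nova's actual code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        # Runs with the in-process "compute_resources" lock held; lockutils
        # emits the DEBUG acquiring/acquired/released lines with the
        # waited/held timings seen above.
        pass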
Jan 21 23:59:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:24 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/4177160710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 21 23:59:24 compute-0 ceph-mon[74318]: pgmap v1230: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:24.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
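Each radosgw request appears as the start/done/beast triple above. The beast access line is fixed-format; a small parsing sketch written against exactly the layout in this log:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
        r'latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous '
            '[21/Jan/2026:23:59:24.771 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000031s')
    m = BEAST.search(line)
    print(m['ip'], m['status'], float(m['latency']))
    # -> 192.168.122.100 200 0.001000031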
Jan 21 23:59:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:25.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2066944795' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:59:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2066944795' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
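The two dispatches above (a df plus a get-quota on pool "volumes" from client.openstack) are consistent with a Cinder RBD driver periodic capacity check. The same quota query from the CLI, as a sketch -- the output keys quota_max_bytes/quota_max_objects are assumed from the reef-era JSON format:

    import json
    import subprocess

    q = json.loads(subprocess.run(
        ["ceph", "osd", "pool", "get-quota", "volumes",
         "--format", "json", "--id", "openstack"],
        check=True, capture_output=True, text=True,
    ).stdout)
    # 0 in either field conventionally means "no quota set".
    print(q["quota_max_bytes"], q["quota_max_objects"])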
Jan 21 23:59:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:26.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:26 compute-0 ceph-mon[74318]: pgmap v1231: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:27.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:59:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:28 compute-0 ceph-mon[74318]: pgmap v1232: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:28.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:29.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:30.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:31 compute-0 ceph-mon[74318]: pgmap v1233: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:31 compute-0 sudo[265048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:59:31 compute-0 sudo[265048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:59:31 compute-0 sudo[265048]: pam_unix(sudo:session): session closed for user root
Jan 21 23:59:31 compute-0 sudo[265073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:59:31 compute-0 sudo[265073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:59:31 compute-0 sudo[265073]: pam_unix(sudo:session): session closed for user root
Jan 21 23:59:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:31.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:32.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:59:33 compute-0 ceph-mon[74318]: pgmap v1234: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:33.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:34.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:35.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:35 compute-0 ceph-mon[74318]: pgmap v1235: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:36.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:37.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:37 compute-0 ceph-mon[74318]: pgmap v1236: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:59:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:38.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:59:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:59:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:59:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:59:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 23:59:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 23:59:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 23:59:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:59:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:59:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:59:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 23:59:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 23:59:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 23:59:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 23:59:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 23:59:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
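The rbd_support lines above show the mgr module reloading MirrorSnapshotSchedule and TrashPurgeSchedule entries for each pool. A sketch of the matching CLI views (subcommands as in reef-era rbd; the pool list is taken from the log):

    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        # Mirror-snapshot schedules the handler above just loaded:
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls",
                        "--pool", pool], check=False)
        # Trash-purge schedules for the same pool:
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls",
                        "--pool", pool], check=False)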
Jan 21 23:59:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:39.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:39 compute-0 ceph-mon[74318]: pgmap v1237: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 21 23:59:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 21 23:59:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:40.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:41 compute-0 podman[265103]: 2026-01-21 23:59:41.005352378 +0000 UTC m=+0.108004253 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
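podman records a health_status event like the one above each time the container's configured healthcheck (test '/openstack/healthcheck' in config_data) fires. The same check can be triggered by hand; exit code 0 maps to healthy -- a sketch, with the container name taken from the log:

    import subprocess

    rc = subprocess.run(["podman", "healthcheck", "run",
                         "ovn_controller"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")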
Jan 21 23:59:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:41.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:41 compute-0 ceph-mon[74318]: pgmap v1238: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 21 23:59:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 7 op/s
Jan 21 23:59:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:42.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:59:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:43.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:43 compute-0 ceph-mon[74318]: pgmap v1239: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 7 op/s
Jan 21 23:59:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 7 op/s
Jan 21 23:59:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:44.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:45.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:45 compute-0 ceph-mon[74318]: pgmap v1240: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 7 op/s
Jan 21 23:59:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 21 23:59:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 21 23:59:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:46.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 21 23:59:46 compute-0 podman[265132]: 2026-01-21 23:59:46.983942503 +0000 UTC m=+0.083400745 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 23:59:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:47.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:59:47 compute-0 ceph-mon[74318]: pgmap v1241: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 21 23:59:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 21 23:59:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:59:48.757 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 23:59:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:59:48.758 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 23:59:48 compute-0 ovn_metadata_agent[159045]: 2026-01-21 23:59:48.759 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 23:59:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:48.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:49.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:49 compute-0 ceph-mon[74318]: pgmap v1242: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 21 23:59:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 21 23:59:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:59:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:50.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:59:50 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1722878513' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:59:50 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1722878513' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:59:51 compute-0 sudo[265153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:59:51 compute-0 sudo[265153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:59:51 compute-0 sudo[265153]: pam_unix(sudo:session): session closed for user root
Jan 21 23:59:51 compute-0 sudo[265178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:59:51 compute-0 sudo[265178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:59:51 compute-0 sudo[265178]: pam_unix(sudo:session): session closed for user root
Jan 21 23:59:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:51.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:51 compute-0 ceph-mon[74318]: pgmap v1243: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 21 23:59:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 81 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 21 23:59:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:52.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:59:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:53.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:53 compute-0 ceph-mon[74318]: pgmap v1244: 305 pgs: 305 active+clean; 81 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0007170182509771077 of space, bias 1.0, pg target 0.21510547529313231 quantized to 32 (current 32)
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
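Every pg_autoscaler line above fits pg_target = space_ratio * bias * budget, then quantized to a power of two. Solving for the budget from the logged numbers gives 300; reading that as mon_target_pg_per_osd=100 across 3 OSDs is an inference, not something this log states:

    # Solve for the autoscaler's PG budget from the 'volumes' line above:
    space_ratio, bias, pg_target = 0.0007170182509771077, 1.0, 0.21510547529313231
    print(round(pg_target / (space_ratio * bias)))   # 300

    # Cross-check against 'cephfs.cephfs.meta', which carries bias 4.0:
    print(1.4540294062907128e-06 * 4.0 * 300)        # ~0.00174483..., as logged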
Jan 21 23:59:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 81 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 21 23:59:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:54.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:55.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:55 compute-0 ceph-mon[74318]: pgmap v1245: 305 pgs: 305 active+clean; 81 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 21 23:59:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Jan 21 23:59:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:56.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 21 23:59:57 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2370122486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:59:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 21 23:59:57 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2370122486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:59:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:57.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 23:59:57 compute-0 ceph-mon[74318]: pgmap v1246: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Jan 21 23:59:57 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2370122486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 21 23:59:57 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2370122486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 21 23:59:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 938 B/s wr, 16 op/s
Jan 21 23:59:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 21 23:59:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [21/Jan/2026:23:59:58.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 21 23:59:59 compute-0 sudo[265208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:59:59 compute-0 sudo[265208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:59:59 compute-0 sudo[265208]: pam_unix(sudo:session): session closed for user root
Jan 21 23:59:59 compute-0 sudo[265233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 23:59:59 compute-0 sudo[265233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:59:59 compute-0 sudo[265233]: pam_unix(sudo:session): session closed for user root
Jan 21 23:59:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 21 23:59:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 21 23:59:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [21/Jan/2026:23:59:59.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 21 23:59:59 compute-0 sudo[265258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 21 23:59:59 compute-0 sudo[265258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 23:59:59 compute-0 sudo[265258]: pam_unix(sudo:session): session closed for user root
Jan 21 23:59:59 compute-0 sudo[265283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 21 23:59:59 compute-0 sudo[265283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:00 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 22 00:00:00 compute-0 ceph-mon[74318]: pgmap v1247: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 938 B/s wr, 16 op/s
Jan 22 00:00:00 compute-0 sudo[265283]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:00:00 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:00:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 00:00:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:00:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 00:00:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:00:00 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 427ad872-8edc-4605-a7b9-e5f00b0e4c38 does not exist
Jan 22 00:00:00 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev a00fee78-d101-45d9-aec7-df89098fb161 does not exist
Jan 22 00:00:00 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 43247268-b3f7-40b4-a3e2-bb44ec7a72bd does not exist
Jan 22 00:00:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 00:00:00 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:00:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 00:00:00 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:00:00 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:00:00 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:00:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 22 00:00:00 compute-0 systemd[1]: Starting update of the root trust anchor for DNSSEC validation in unbound...
Jan 22 00:00:00 compute-0 systemd[1]: Starting Rotate log files...
Jan 22 00:00:00 compute-0 sudo[265340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:00:00 compute-0 sudo[265340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:00 compute-0 sudo[265340]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:00 compute-0 systemd[1]: unbound-anchor.service: Deactivated successfully.
Jan 22 00:00:00 compute-0 systemd[1]: Finished update of the root trust anchor for DNSSEC validation in unbound.
Jan 22 00:00:00 compute-0 sudo[265367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:00:00 compute-0 sudo[265367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:00 compute-0 sudo[265367]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:00:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:00.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:00:00 compute-0 systemd[1]: logrotate.service: Deactivated successfully.
Jan 22 00:00:00 compute-0 systemd[1]: Finished Rotate log files.
Jan 22 00:00:00 compute-0 sudo[265392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:00:00 compute-0 sudo[265392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:00 compute-0 sudo[265392]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:00 compute-0 sudo[265419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 00:00:00 compute-0 sudo[265419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:01 compute-0 ceph-mon[74318]: overall HEALTH_OK
Jan 22 00:00:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:00:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:00:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:00:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:00:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:00:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:00:01 compute-0 podman[265484]: 2026-01-22 00:00:01.315038886 +0000 UTC m=+0.039585267 container create e6ed7317ee0d9fe1786a175c9e3257a1791f3abd9e6831ff7c5bb81965bb9ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cannon, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Jan 22 00:00:01 compute-0 systemd[1]: Started libpod-conmon-e6ed7317ee0d9fe1786a175c9e3257a1791f3abd9e6831ff7c5bb81965bb9ff1.scope.
Jan 22 00:00:01 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:00:01 compute-0 podman[265484]: 2026-01-22 00:00:01.29644384 +0000 UTC m=+0.020990221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:00:01 compute-0 podman[265484]: 2026-01-22 00:00:01.407346513 +0000 UTC m=+0.131892924 container init e6ed7317ee0d9fe1786a175c9e3257a1791f3abd9e6831ff7c5bb81965bb9ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:00:01 compute-0 podman[265484]: 2026-01-22 00:00:01.41824158 +0000 UTC m=+0.142787961 container start e6ed7317ee0d9fe1786a175c9e3257a1791f3abd9e6831ff7c5bb81965bb9ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cannon, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:00:01 compute-0 podman[265484]: 2026-01-22 00:00:01.425303429 +0000 UTC m=+0.149849840 container attach e6ed7317ee0d9fe1786a175c9e3257a1791f3abd9e6831ff7c5bb81965bb9ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 00:00:01 compute-0 sad_cannon[265501]: 167 167
Jan 22 00:00:01 compute-0 systemd[1]: libpod-e6ed7317ee0d9fe1786a175c9e3257a1791f3abd9e6831ff7c5bb81965bb9ff1.scope: Deactivated successfully.
Jan 22 00:00:01 compute-0 conmon[265501]: conmon e6ed7317ee0d9fe1786a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6ed7317ee0d9fe1786a175c9e3257a1791f3abd9e6831ff7c5bb81965bb9ff1.scope/container/memory.events
Jan 22 00:00:01 compute-0 podman[265484]: 2026-01-22 00:00:01.427159616 +0000 UTC m=+0.151706067 container died e6ed7317ee0d9fe1786a175c9e3257a1791f3abd9e6831ff7c5bb81965bb9ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cannon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 22 00:00:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d2ce352fada1d1a993cf57664e1c9d82211c5b6611305c2b1a6cdb7b9125cd9-merged.mount: Deactivated successfully.
Jan 22 00:00:01 compute-0 podman[265484]: 2026-01-22 00:00:01.494831491 +0000 UTC m=+0.219377872 container remove e6ed7317ee0d9fe1786a175c9e3257a1791f3abd9e6831ff7c5bb81965bb9ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cannon, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:00:01 compute-0 systemd[1]: libpod-conmon-e6ed7317ee0d9fe1786a175c9e3257a1791f3abd9e6831ff7c5bb81965bb9ff1.scope: Deactivated successfully.
Jan 22 00:00:01 compute-0 podman[265527]: 2026-01-22 00:00:01.674351499 +0000 UTC m=+0.052476147 container create d168f02594774f5c429c694251efc8dc2051bdd507388aee9952336600ed8186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 00:00:01 compute-0 systemd[1]: Started libpod-conmon-d168f02594774f5c429c694251efc8dc2051bdd507388aee9952336600ed8186.scope.
Jan 22 00:00:01 compute-0 podman[265527]: 2026-01-22 00:00:01.647840307 +0000 UTC m=+0.025965005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:00:01 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:00:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:00:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:01.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561adfb8628d5251f1f51df1c3c50a9305b0d91a3a529de175e7d34b926bac9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561adfb8628d5251f1f51df1c3c50a9305b0d91a3a529de175e7d34b926bac9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561adfb8628d5251f1f51df1c3c50a9305b0d91a3a529de175e7d34b926bac9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561adfb8628d5251f1f51df1c3c50a9305b0d91a3a529de175e7d34b926bac9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561adfb8628d5251f1f51df1c3c50a9305b0d91a3a529de175e7d34b926bac9f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 00:00:01 compute-0 podman[265527]: 2026-01-22 00:00:01.788168212 +0000 UTC m=+0.166292900 container init d168f02594774f5c429c694251efc8dc2051bdd507388aee9952336600ed8186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:00:01 compute-0 podman[265527]: 2026-01-22 00:00:01.802439473 +0000 UTC m=+0.180564081 container start d168f02594774f5c429c694251efc8dc2051bdd507388aee9952336600ed8186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_joliot, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 00:00:01 compute-0 podman[265527]: 2026-01-22 00:00:01.805922041 +0000 UTC m=+0.184046679 container attach d168f02594774f5c429c694251efc8dc2051bdd507388aee9952336600ed8186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 00:00:02 compute-0 ceph-mon[74318]: pgmap v1248: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 22 00:00:02 compute-0 xenodochial_joliot[265543]: --> passed data devices: 0 physical, 1 LVM
Jan 22 00:00:02 compute-0 xenodochial_joliot[265543]: --> relative data size: 1.0
Jan 22 00:00:02 compute-0 xenodochial_joliot[265543]: --> All data devices are unavailable
Jan 22 00:00:02 compute-0 systemd[1]: libpod-d168f02594774f5c429c694251efc8dc2051bdd507388aee9952336600ed8186.scope: Deactivated successfully.
Jan 22 00:00:02 compute-0 podman[265527]: 2026-01-22 00:00:02.647896875 +0000 UTC m=+1.026021493 container died d168f02594774f5c429c694251efc8dc2051bdd507388aee9952336600ed8186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_joliot, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 00:00:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 938 B/s wr, 65 op/s
Jan 22 00:00:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-561adfb8628d5251f1f51df1c3c50a9305b0d91a3a529de175e7d34b926bac9f-merged.mount: Deactivated successfully.
Jan 22 00:00:02 compute-0 podman[265527]: 2026-01-22 00:00:02.70652511 +0000 UTC m=+1.084649708 container remove d168f02594774f5c429c694251efc8dc2051bdd507388aee9952336600ed8186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_joliot, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 00:00:02 compute-0 systemd[1]: libpod-conmon-d168f02594774f5c429c694251efc8dc2051bdd507388aee9952336600ed8186.scope: Deactivated successfully.
Jan 22 00:00:02 compute-0 sudo[265419]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:00:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:02.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:00:02 compute-0 sudo[265571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:00:02 compute-0 sudo[265571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:02 compute-0 sudo[265571]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:00:02 compute-0 sudo[265596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:00:02 compute-0 sudo[265596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:02 compute-0 sudo[265596]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:02 compute-0 sudo[265621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:00:02 compute-0 sudo[265621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:02 compute-0 sudo[265621]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:03 compute-0 sudo[265646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 22 00:00:03 compute-0 sudo[265646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:03 compute-0 podman[265711]: 2026-01-22 00:00:03.42918541 +0000 UTC m=+0.069706208 container create d098103a58c046857b3a2a132f1054f234d7e469db280601df9898ef8dc04031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goodall, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 00:00:03 compute-0 systemd[1]: Started libpod-conmon-d098103a58c046857b3a2a132f1054f234d7e469db280601df9898ef8dc04031.scope.
Jan 22 00:00:03 compute-0 podman[265711]: 2026-01-22 00:00:03.399312955 +0000 UTC m=+0.039833823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:00:03 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:00:03 compute-0 podman[265711]: 2026-01-22 00:00:03.527439102 +0000 UTC m=+0.167960000 container init d098103a58c046857b3a2a132f1054f234d7e469db280601df9898ef8dc04031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 00:00:03 compute-0 podman[265711]: 2026-01-22 00:00:03.539545476 +0000 UTC m=+0.180066304 container start d098103a58c046857b3a2a132f1054f234d7e469db280601df9898ef8dc04031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goodall, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 22 00:00:03 compute-0 podman[265711]: 2026-01-22 00:00:03.543429577 +0000 UTC m=+0.183950465 container attach d098103a58c046857b3a2a132f1054f234d7e469db280601df9898ef8dc04031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goodall, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:00:03 compute-0 clever_goodall[265729]: 167 167
Jan 22 00:00:03 compute-0 systemd[1]: libpod-d098103a58c046857b3a2a132f1054f234d7e469db280601df9898ef8dc04031.scope: Deactivated successfully.
Jan 22 00:00:03 compute-0 podman[265711]: 2026-01-22 00:00:03.54580403 +0000 UTC m=+0.186324818 container died d098103a58c046857b3a2a132f1054f234d7e469db280601df9898ef8dc04031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goodall, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:00:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-412d3d75f3b966a48807ea1b266855642b163d924bb8bbdb8b54a03489b73b82-merged.mount: Deactivated successfully.
Jan 22 00:00:03 compute-0 podman[265711]: 2026-01-22 00:00:03.585063615 +0000 UTC m=+0.225584413 container remove d098103a58c046857b3a2a132f1054f234d7e469db280601df9898ef8dc04031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 00:00:03 compute-0 systemd[1]: libpod-conmon-d098103a58c046857b3a2a132f1054f234d7e469db280601df9898ef8dc04031.scope: Deactivated successfully.
Jan 22 00:00:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:03.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:03 compute-0 podman[265751]: 2026-01-22 00:00:03.779640869 +0000 UTC m=+0.053513988 container create 681e6be1375591b8f7d46328a90d5378716dc127b879eb5197798bea7854c17d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:00:03 compute-0 systemd[1]: Started libpod-conmon-681e6be1375591b8f7d46328a90d5378716dc127b879eb5197798bea7854c17d.scope.
Jan 22 00:00:03 compute-0 podman[265751]: 2026-01-22 00:00:03.754205422 +0000 UTC m=+0.028078641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:00:03 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ffa95070c5805a8c2d08d7f9137c83045ec263e12223500f7de4527badc74a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ffa95070c5805a8c2d08d7f9137c83045ec263e12223500f7de4527badc74a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ffa95070c5805a8c2d08d7f9137c83045ec263e12223500f7de4527badc74a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ffa95070c5805a8c2d08d7f9137c83045ec263e12223500f7de4527badc74a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:00:03 compute-0 podman[265751]: 2026-01-22 00:00:03.887920831 +0000 UTC m=+0.161794050 container init 681e6be1375591b8f7d46328a90d5378716dc127b879eb5197798bea7854c17d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:00:03 compute-0 podman[265751]: 2026-01-22 00:00:03.902472601 +0000 UTC m=+0.176345720 container start 681e6be1375591b8f7d46328a90d5378716dc127b879eb5197798bea7854c17d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:00:03 compute-0 podman[265751]: 2026-01-22 00:00:03.906880667 +0000 UTC m=+0.180753826 container attach 681e6be1375591b8f7d46328a90d5378716dc127b879eb5197798bea7854c17d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:00:04 compute-0 ceph-mon[74318]: pgmap v1249: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 938 B/s wr, 65 op/s
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]: {
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:     "1": [
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:         {
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:             "devices": [
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:                 "/dev/loop3"
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:             ],
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:             "lv_name": "ceph_lv0",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:             "lv_size": "7511998464",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:             "name": "ceph_lv0",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:             "tags": {
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:                 "ceph.cluster_name": "ceph",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:                 "ceph.crush_device_class": "",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:                 "ceph.encrypted": "0",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:                 "ceph.osd_id": "1",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:                 "ceph.type": "block",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:                 "ceph.vdo": "0"
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:             },
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:             "type": "block",
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:             "vg_name": "ceph_vg0"
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:         }
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]:     ]
Jan 22 00:00:04 compute-0 magical_kapitsa[265768]: }
Jan 22 00:00:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 852 B/s wr, 54 op/s
Jan 22 00:00:04 compute-0 systemd[1]: libpod-681e6be1375591b8f7d46328a90d5378716dc127b879eb5197798bea7854c17d.scope: Deactivated successfully.
Jan 22 00:00:04 compute-0 podman[265751]: 2026-01-22 00:00:04.70293588 +0000 UTC m=+0.976809019 container died 681e6be1375591b8f7d46328a90d5378716dc127b879eb5197798bea7854c17d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:00:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-87ffa95070c5805a8c2d08d7f9137c83045ec263e12223500f7de4527badc74a-merged.mount: Deactivated successfully.
Jan 22 00:00:04 compute-0 podman[265751]: 2026-01-22 00:00:04.77077336 +0000 UTC m=+1.044646509 container remove 681e6be1375591b8f7d46328a90d5378716dc127b879eb5197798bea7854c17d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 00:00:04 compute-0 systemd[1]: libpod-conmon-681e6be1375591b8f7d46328a90d5378716dc127b879eb5197798bea7854c17d.scope: Deactivated successfully.
Jan 22 00:00:04 compute-0 sudo[265646]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:00:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:04.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:00:04 compute-0 sudo[265789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:00:04 compute-0 sudo[265789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:04 compute-0 sudo[265789]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:05 compute-0 sudo[265814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:00:05 compute-0 sudo[265814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:05 compute-0 sudo[265814]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:05 compute-0 sudo[265839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:00:05 compute-0 sudo[265839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:05 compute-0 sudo[265839]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:05 compute-0 sudo[265864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 22 00:00:05 compute-0 sudo[265864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:05 compute-0 podman[265930]: 2026-01-22 00:00:05.585218832 +0000 UTC m=+0.044729866 container create 9df8a69506b912af05500174235e6630aac83fe4ffcf9475ab367f6d993ebd2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:00:05 compute-0 systemd[1]: Started libpod-conmon-9df8a69506b912af05500174235e6630aac83fe4ffcf9475ab367f6d993ebd2e.scope.
Jan 22 00:00:05 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:00:05 compute-0 podman[265930]: 2026-01-22 00:00:05.564832781 +0000 UTC m=+0.024343895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:00:05 compute-0 podman[265930]: 2026-01-22 00:00:05.66981024 +0000 UTC m=+0.129321374 container init 9df8a69506b912af05500174235e6630aac83fe4ffcf9475ab367f6d993ebd2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 00:00:05 compute-0 podman[265930]: 2026-01-22 00:00:05.681702218 +0000 UTC m=+0.141213272 container start 9df8a69506b912af05500174235e6630aac83fe4ffcf9475ab367f6d993ebd2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:00:05 compute-0 podman[265930]: 2026-01-22 00:00:05.686133276 +0000 UTC m=+0.145644400 container attach 9df8a69506b912af05500174235e6630aac83fe4ffcf9475ab367f6d993ebd2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 00:00:05 compute-0 clever_carson[265946]: 167 167
Jan 22 00:00:05 compute-0 systemd[1]: libpod-9df8a69506b912af05500174235e6630aac83fe4ffcf9475ab367f6d993ebd2e.scope: Deactivated successfully.
Jan 22 00:00:05 compute-0 podman[265930]: 2026-01-22 00:00:05.690243052 +0000 UTC m=+0.149754106 container died 9df8a69506b912af05500174235e6630aac83fe4ffcf9475ab367f6d993ebd2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 00:00:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d121c85010352beea2ffda2a4d32851acbb26b2fce966a88a2ca0274d04c1382-merged.mount: Deactivated successfully.
Jan 22 00:00:05 compute-0 podman[265930]: 2026-01-22 00:00:05.730399956 +0000 UTC m=+0.189911020 container remove 9df8a69506b912af05500174235e6630aac83fe4ffcf9475ab367f6d993ebd2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 00:00:05 compute-0 systemd[1]: libpod-conmon-9df8a69506b912af05500174235e6630aac83fe4ffcf9475ab367f6d993ebd2e.scope: Deactivated successfully.
Jan 22 00:00:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:00:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:05.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:00:05 compute-0 podman[265970]: 2026-01-22 00:00:05.932274355 +0000 UTC m=+0.047819411 container create 252c916c3c1fdb4729229f0430451650514fee8462b9599caea452983eb84768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:00:05 compute-0 systemd[1]: Started libpod-conmon-252c916c3c1fdb4729229f0430451650514fee8462b9599caea452983eb84768.scope.
Jan 22 00:00:06 compute-0 podman[265970]: 2026-01-22 00:00:05.91209392 +0000 UTC m=+0.027638996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:00:06 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b946a3ae54f6cf3a0c9120a78c0167dcc5487cf8c8dcd0c6b905c1aa0dc9ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b946a3ae54f6cf3a0c9120a78c0167dcc5487cf8c8dcd0c6b905c1aa0dc9ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b946a3ae54f6cf3a0c9120a78c0167dcc5487cf8c8dcd0c6b905c1aa0dc9ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b946a3ae54f6cf3a0c9120a78c0167dcc5487cf8c8dcd0c6b905c1aa0dc9ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:00:06 compute-0 podman[265970]: 2026-01-22 00:00:06.026703838 +0000 UTC m=+0.142248924 container init 252c916c3c1fdb4729229f0430451650514fee8462b9599caea452983eb84768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 00:00:06 compute-0 podman[265970]: 2026-01-22 00:00:06.039193475 +0000 UTC m=+0.154738521 container start 252c916c3c1fdb4729229f0430451650514fee8462b9599caea452983eb84768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:00:06 compute-0 podman[265970]: 2026-01-22 00:00:06.042713393 +0000 UTC m=+0.158258469 container attach 252c916c3c1fdb4729229f0430451650514fee8462b9599caea452983eb84768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:00:06 compute-0 ceph-mon[74318]: pgmap v1250: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 852 B/s wr, 54 op/s
Jan 22 00:00:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 852 B/s wr, 139 op/s
Jan 22 00:00:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:06.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:07 compute-0 sweet_dewdney[265986]: {
Jan 22 00:00:07 compute-0 sweet_dewdney[265986]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 22 00:00:07 compute-0 sweet_dewdney[265986]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:00:07 compute-0 sweet_dewdney[265986]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 00:00:07 compute-0 sweet_dewdney[265986]:         "osd_id": 1,
Jan 22 00:00:07 compute-0 sweet_dewdney[265986]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:00:07 compute-0 sweet_dewdney[265986]:         "type": "bluestore"
Jan 22 00:00:07 compute-0 sweet_dewdney[265986]:     }
Jan 22 00:00:07 compute-0 sweet_dewdney[265986]: }
Jan 22 00:00:07 compute-0 systemd[1]: libpod-252c916c3c1fdb4729229f0430451650514fee8462b9599caea452983eb84768.scope: Deactivated successfully.
Jan 22 00:00:07 compute-0 podman[265970]: 2026-01-22 00:00:07.056214667 +0000 UTC m=+1.171759723 container died 252c916c3c1fdb4729229f0430451650514fee8462b9599caea452983eb84768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 00:00:07 compute-0 systemd[1]: libpod-252c916c3c1fdb4729229f0430451650514fee8462b9599caea452983eb84768.scope: Consumed 1.019s CPU time.
Jan 22 00:00:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-56b946a3ae54f6cf3a0c9120a78c0167dcc5487cf8c8dcd0c6b905c1aa0dc9ea-merged.mount: Deactivated successfully.
Jan 22 00:00:07 compute-0 podman[265970]: 2026-01-22 00:00:07.115541773 +0000 UTC m=+1.231086839 container remove 252c916c3c1fdb4729229f0430451650514fee8462b9599caea452983eb84768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:00:07 compute-0 systemd[1]: libpod-conmon-252c916c3c1fdb4729229f0430451650514fee8462b9599caea452983eb84768.scope: Deactivated successfully.
Jan 22 00:00:07 compute-0 sudo[265864]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:00:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:00:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:00:07 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:00:07 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 197dfe21-1c07-4803-a150-3381e4f2772b does not exist
Jan 22 00:00:07 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 735c671a-1e94-4fc4-8e92-db654c1be8e1 does not exist
Jan 22 00:00:07 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 42e2e1bc-f40e-488d-bf1b-156d92255722 does not exist
Jan 22 00:00:07 compute-0 sudo[266020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:00:07 compute-0 sudo[266020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:07 compute-0 sudo[266020]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:07 compute-0 sudo[266045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 00:00:07 compute-0 sudo[266045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:07 compute-0 sudo[266045]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:00:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:07.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:00:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:00:08 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:00:08.348 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 00:00:08 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:00:08.353 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 00:00:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 79 KiB/s rd, 255 B/s wr, 133 op/s
Jan 22 00:00:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:00:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:08.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:00:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:00:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:00:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:00:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:00:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:00:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:00:09 compute-0 ceph-mon[74318]: pgmap v1251: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 852 B/s wr, 139 op/s
Jan 22 00:00:09 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:00:09 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:00:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:09.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:10 compute-0 ceph-mon[74318]: pgmap v1252: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 79 KiB/s rd, 255 B/s wr, 133 op/s
Jan 22 00:00:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 79 KiB/s rd, 255 B/s wr, 133 op/s
Jan 22 00:00:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:10.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:11 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:00:11.357 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 00:00:11 compute-0 sudo[266073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:00:11 compute-0 sudo[266073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:11 compute-0 sudo[266073]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:11 compute-0 sudo[266104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:00:11 compute-0 sudo[266104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:11 compute-0 sudo[266104]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:11 compute-0 podman[266097]: 2026-01-22 00:00:11.621183709 +0000 UTC m=+0.099589584 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 00:00:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:11.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:12 compute-0 ceph-mon[74318]: pgmap v1253: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 79 KiB/s rd, 255 B/s wr, 133 op/s
Jan 22 00:00:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 121 op/s
Jan 22 00:00:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:00:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:12.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:00:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:00:12 compute-0 nova_compute[247516]: 2026-01-22 00:00:12.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:00:12 compute-0 nova_compute[247516]: 2026-01-22 00:00:12.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:00:12 compute-0 nova_compute[247516]: 2026-01-22 00:00:12.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:00:13 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/750862182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:00:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:13.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:14 compute-0 ceph-mon[74318]: pgmap v1254: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 121 op/s
Jan 22 00:00:14 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3384514895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:00:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 84 op/s
Jan 22 00:00:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:14.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:15.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:16 compute-0 nova_compute[247516]: 2026-01-22 00:00:16.000 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:00:16 compute-0 ceph-mon[74318]: pgmap v1255: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 84 op/s
Jan 22 00:00:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 84 op/s
Jan 22 00:00:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:00:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:16.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:00:16 compute-0 nova_compute[247516]: 2026-01-22 00:00:16.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:00:16 compute-0 nova_compute[247516]: 2026-01-22 00:00:16.991 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:00:16 compute-0 nova_compute[247516]: 2026-01-22 00:00:16.991 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:00:17 compute-0 nova_compute[247516]: 2026-01-22 00:00:17.014 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 00:00:17 compute-0 nova_compute[247516]: 2026-01-22 00:00:17.014 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:00:17 compute-0 nova_compute[247516]: 2026-01-22 00:00:17.015 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 22 00:00:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:17.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:00:17 compute-0 podman[266150]: 2026-01-22 00:00:17.941527187 +0000 UTC m=+0.057629844 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 00:00:18 compute-0 ceph-mon[74318]: pgmap v1256: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 84 op/s
Jan 22 00:00:18 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2372706985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:00:18 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1718612275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:00:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:00:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:18.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:00:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:19.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:20 compute-0 nova_compute[247516]: 2026-01-22 00:00:20.005 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:00:20 compute-0 ceph-mon[74318]: pgmap v1257: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:00:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:20.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:00:20 compute-0 nova_compute[247516]: 2026-01-22 00:00:20.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:00:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:21.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:21 compute-0 nova_compute[247516]: 2026-01-22 00:00:21.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:00:22 compute-0 ceph-mon[74318]: pgmap v1258: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:00:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:22.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:00:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:00:22 compute-0 nova_compute[247516]: 2026-01-22 00:00:22.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:00:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:00:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:23.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:00:23 compute-0 nova_compute[247516]: 2026-01-22 00:00:23.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:00:23 compute-0 nova_compute[247516]: 2026-01-22 00:00:23.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:00:24 compute-0 nova_compute[247516]: 2026-01-22 00:00:24.032 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:00:24 compute-0 nova_compute[247516]: 2026-01-22 00:00:24.033 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:00:24 compute-0 nova_compute[247516]: 2026-01-22 00:00:24.033 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:00:24 compute-0 nova_compute[247516]: 2026-01-22 00:00:24.033 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:00:24 compute-0 nova_compute[247516]: 2026-01-22 00:00:24.033 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:00:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:00:24 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/668904983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:00:24 compute-0 nova_compute[247516]: 2026-01-22 00:00:24.460 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:00:24 compute-0 ceph-mon[74318]: pgmap v1259: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:24 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/668904983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:00:24 compute-0 nova_compute[247516]: 2026-01-22 00:00:24.606 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:00:24 compute-0 nova_compute[247516]: 2026-01-22 00:00:24.608 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5170MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:00:24 compute-0 nova_compute[247516]: 2026-01-22 00:00:24.608 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:00:24 compute-0 nova_compute[247516]: 2026-01-22 00:00:24.608 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:00:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:24 compute-0 nova_compute[247516]: 2026-01-22 00:00:24.796 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 22 00:00:24 compute-0 nova_compute[247516]: 2026-01-22 00:00:24.797 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:00:24 compute-0 nova_compute[247516]: 2026-01-22 00:00:24.797 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:00:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:00:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:24.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:00:24 compute-0 nova_compute[247516]: 2026-01-22 00:00:24.910 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:00:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:00:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1760747834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:00:25 compute-0 nova_compute[247516]: 2026-01-22 00:00:25.385 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:00:25 compute-0 nova_compute[247516]: 2026-01-22 00:00:25.391 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:00:25 compute-0 nova_compute[247516]: 2026-01-22 00:00:25.424 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 00:00:25 compute-0 nova_compute[247516]: 2026-01-22 00:00:25.429 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:00:25 compute-0 nova_compute[247516]: 2026-01-22 00:00:25.430 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:00:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1760747834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:00:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 00:00:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3170871970' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:00:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 00:00:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3170871970' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:00:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:00:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:25.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:00:26 compute-0 ceph-mon[74318]: pgmap v1260: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3170871970' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:00:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3170871970' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:00:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:26.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:27.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:00:28 compute-0 ceph-mon[74318]: pgmap v1261: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:00:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:28.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:00:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:29.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:29 compute-0 nova_compute[247516]: 2026-01-22 00:00:29.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:00:29 compute-0 nova_compute[247516]: 2026-01-22 00:00:29.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 22 00:00:30 compute-0 nova_compute[247516]: 2026-01-22 00:00:30.017 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 22 00:00:30 compute-0 ceph-mon[74318]: pgmap v1262: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:00:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:30.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:00:31 compute-0 sudo[266221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:00:31 compute-0 sudo[266221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:31 compute-0 sudo[266221]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:31 compute-0 sudo[266246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:00:31 compute-0 sudo[266246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:31 compute-0 sudo[266246]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:31.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:31 compute-0 ceph-mon[74318]: pgmap v1263: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:32.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:00:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:33.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:34 compute-0 ceph-mon[74318]: pgmap v1264: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:34.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:35.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:36 compute-0 ceph-mon[74318]: pgmap v1265: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:36.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:00:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:37.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:00:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:00:38 compute-0 ceph-mon[74318]: pgmap v1266: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:00:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:38.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:00:39
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', '.mgr', 'default.rgw.meta', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'images']
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:00:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:00:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:00:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:39.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:00:40 compute-0 ceph-mon[74318]: pgmap v1267: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:40.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:00:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:41.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:00:42 compute-0 podman[266276]: 2026-01-22 00:00:42.033353662 +0000 UTC m=+0.132800151 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 22 00:00:42 compute-0 ceph-mon[74318]: pgmap v1268: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:42.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:00:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:43.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:44 compute-0 ceph-mon[74318]: pgmap v1269: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:44.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:45.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:46 compute-0 ceph-mon[74318]: pgmap v1270: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:46.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:47.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:00:48 compute-0 ceph-mon[74318]: pgmap v1271: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:00:48.758 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:00:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:00:48.759 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:00:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:00:48.759 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:00:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:00:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:48.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:00:48 compute-0 podman[266305]: 2026-01-22 00:00:48.948153266 +0000 UTC m=+0.064920161 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 00:00:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:49.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:50 compute-0 ceph-mon[74318]: pgmap v1272: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:50.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:51.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:51 compute-0 sudo[266328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:00:51 compute-0 sudo[266328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:51 compute-0 sudo[266328]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:52 compute-0 sudo[266353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:00:52 compute-0 sudo[266353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:00:52 compute-0 sudo[266353]: pam_unix(sudo:session): session closed for user root
Jan 22 00:00:52 compute-0 ceph-mon[74318]: pgmap v1273: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:52.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:00:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:53.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:54 compute-0 ceph-mon[74318]: pgmap v1274: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:00:54.337747) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040054337799, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2219, "num_deletes": 501, "total_data_size": 3551798, "memory_usage": 3602232, "flush_reason": "Manual Compaction"}
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040054369065, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3487015, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26699, "largest_seqno": 28917, "table_properties": {"data_size": 3477609, "index_size": 5452, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 22709, "raw_average_key_size": 19, "raw_value_size": 3456805, "raw_average_value_size": 3026, "num_data_blocks": 240, "num_entries": 1142, "num_filter_entries": 1142, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769039848, "oldest_key_time": 1769039848, "file_creation_time": 1769040054, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 31543 microseconds, and 18308 cpu microseconds.
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:00:54.369284) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3487015 bytes OK
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:00:54.369334) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:00:54.372408) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:00:54.372441) EVENT_LOG_v1 {"time_micros": 1769040054372430, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:00:54.372466) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3541872, prev total WAL file size 3541872, number of live WAL files 2.
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:00:54.374373) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3405KB)], [62(10MB)]
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040054374528, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 14660573, "oldest_snapshot_seqno": -1}
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
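The autoscaler arithmetic is visible in the lines themselves: each pool's raw PG target is its fraction of cluster capacity times its bias times a PG budget, which from the printed ratios works out to 300 here (.mgr: 2.05e-05 * 300 = 0.00616; with bias 4.0, cephfs.cephfs.meta: 1.45e-06 * 4 * 300 = 0.00174), and the result is rounded to a power of two and held at the pool's current pg_num when the change is too small to act on. A sketch reproducing the printed numbers under those assumptions; the 300 budget and the hold-at-current rule are inferred from this output, not taken from the module's code:

    def next_pow2(x):
        n = 1
        while n < x:
            n *= 2
        return n

    def pg_target(capacity_fraction, bias, current, budget=300):
        raw = capacity_fraction * bias * budget
        want = next_pow2(raw)
        # The module leaves pg_num alone unless the target is far from the
        # current value (a ~3x threshold by default), which is why pools with
        # a raw target near zero stay at their current 32 (or 16) PGs.
        return want if want > current else current

    print(pg_target(2.0538165363856318e-05, 1.0, current=1))    # -> 1  (.mgr)
    print(pg_target(0.0019031427391587568, 1.0, current=32))    # -> 32 (images)
    print(pg_target(1.4540294062907128e-06, 4.0, current=16))   # -> 16 (cephfs meta)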
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5228 keys, 8866476 bytes, temperature: kUnknown
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040054450373, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 8866476, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8831409, "index_size": 20877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 133233, "raw_average_key_size": 25, "raw_value_size": 8736795, "raw_average_value_size": 1671, "num_data_blocks": 843, "num_entries": 5228, "num_filter_entries": 5228, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769040054, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:00:54.451113) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 8866476 bytes
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:00:54.452895) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.1 rd, 116.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 10.7 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(6.7) write-amplify(2.5) OK, records in: 6244, records dropped: 1016 output_compression: NoCompression
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:00:54.452942) EVENT_LOG_v1 {"time_micros": 1769040054452922, "job": 34, "event": "compaction_finished", "compaction_time_micros": 76299, "compaction_time_cpu_micros": 49847, "output_level": 6, "num_output_files": 1, "total_output_size": 8866476, "num_input_records": 6244, "num_output_records": 5228, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040054454330, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040054458513, "job": 34, "event": "table_file_deletion", "file_number": 62}
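The JOB 34 summary figures are internally consistent and can be cross-checked from the two EVENT_LOG_v1 payloads plus the input summary (file 64 is 3405KB of L0, file 62 the ~10.7MB L6 table): bytes per microsecond gives the MB/s numbers, and the amplification ratios are taken against the L0 input. Arithmetic only; every constant below is copied from the log above:

    input_bytes = 14660573     # compaction_started: input_data_size (L0 + L6)
    output_bytes = 8866476     # compaction_finished: total_output_size
    elapsed_us = 76299         # compaction_finished: compaction_time_micros
    l0_bytes = 3405 * 1024     # "inputs: [64(3405KB)]"

    print(f"read  {input_bytes / elapsed_us:.1f} MB/s")    # 192.1 rd
    print(f"write {output_bytes / elapsed_us:.1f} MB/s")   # 116.2 wr
    print(f"write-amplify {output_bytes / l0_bytes:.1f}")  # 2.5 (output / L0 in)
    print(f"rw-amplify {(input_bytes + output_bytes) / l0_bytes:.1f}")  # 6.7
    print("records dropped:", 6244 - 5228)                 # 1016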
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:00:54.374288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:00:54.458628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:00:54.458636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:00:54.458639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:00:54.458643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:00:54 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:00:54.458646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:00:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:00:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:54.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:00:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:55.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:56 compute-0 ceph-mon[74318]: pgmap v1275: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:56.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:00:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:00:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:57.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
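The anonymous HEAD / probes arriving every two seconds from 192.168.122.100 and .102 are load-balancer health checks (consistent with the haproxy-rgw and keepalived-rgw containers that appear further down). The beast access line has a stable shape: pointer, client address, user, timestamp, request line, status, body bytes, then a trailing latency, so a single regex recovers the fields. A sketch matching these lines, not the frontend's canonical grammar:

    import re

    BEAST = re.compile(
        r'beast: (?P<ptr>0x[0-9a-f]+): (?P<addr>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[0-9.]+)s'
    )

    line = ('beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous '
            '[22/Jan/2026:00:00:54.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000031s')
    m = BEAST.search(line)
    print(m.group("addr"), m.group("req"), m.group("status"),
          float(m.group("latency")))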
Jan 22 00:00:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
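The mon's cache autotuner reports how it splits its ~1 GiB target between incremental osdmaps, full osdmaps, and the RocksDB block cache; the three printed allocations should, and do, account for nearly all of cache_size. Checking the split, with the fractions read off these numbers rather than taken from documented defaults:

    cache_size = 1020054731
    inc_alloc, full_alloc, kv_alloc = 348127232, 348127232, 318767104
    print(inc_alloc + full_alloc + kv_alloc)   # 1015021568, ~99.5% of target
    print(kv_alloc / cache_size)               # ~0.31 -> KV cache share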
Jan 22 00:00:58 compute-0 ceph-mon[74318]: pgmap v1276: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:00:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:00:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:00:58.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:00:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:00:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:00:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:00:59.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:01:00 compute-0 ceph-mon[74318]: pgmap v1277: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:00.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:01 compute-0 CROND[266385]: (root) CMD (run-parts /etc/cron.hourly)
Jan 22 00:01:01 compute-0 run-parts[266388]: (/etc/cron.hourly) starting 0anacron
Jan 22 00:01:01 compute-0 anacron[266396]: Anacron started on 2026-01-22
Jan 22 00:01:01 compute-0 anacron[266396]: Job `cron.monthly' locked by another anacron - skipping
Jan 22 00:01:01 compute-0 anacron[266396]: Normal exit (0 jobs run)
Jan 22 00:01:01 compute-0 run-parts[266398]: (/etc/cron.hourly) finished 0anacron
Jan 22 00:01:01 compute-0 CROND[266384]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 22 00:01:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:01.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:02 compute-0 ceph-mon[74318]: pgmap v1278: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:01:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:02.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:01:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:01:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:03.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:04 compute-0 ceph-mon[74318]: pgmap v1279: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:04.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:05.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:06 compute-0 ceph-mon[74318]: pgmap v1280: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:06.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:07.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:07 compute-0 sudo[266402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:01:07 compute-0 sudo[266402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:07 compute-0 sudo[266402]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:01:07 compute-0 sudo[266427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:01:07 compute-0 sudo[266427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:07 compute-0 sudo[266427]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:08 compute-0 sudo[266452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:01:08 compute-0 sudo[266452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:08 compute-0 sudo[266452]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:08 compute-0 sudo[266477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 00:01:08 compute-0 sudo[266477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:08 compute-0 ceph-mon[74318]: pgmap v1281: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:08 compute-0 podman[266571]: 2026-01-22 00:01:08.878389646 +0000 UTC m=+0.105023151 container exec 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:01:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:08.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:09 compute-0 podman[266571]: 2026-01-22 00:01:09.009283769 +0000 UTC m=+0.235917204 container exec_died 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 00:01:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:01:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:01:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:01:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:01:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:01:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:01:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 00:01:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:01:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 00:01:09 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:01:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:09.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:09 compute-0 podman[266722]: 2026-01-22 00:01:09.916295296 +0000 UTC m=+0.094981591 container exec fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 22 00:01:09 compute-0 podman[266722]: 2026-01-22 00:01:09.938773792 +0000 UTC m=+0.117460087 container exec_died fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 22 00:01:10 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:01:10.108 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 00:01:10 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:01:10.110 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 00:01:10 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:01:10.112 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
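The metadata agent is acknowledging the new southbound sequence number: it copies nb_cfg=16 into its Chassis_Private row's external_ids under neutron:ovn-metadata-sb-cfg, via an ovsdbapp DbSetCommand. The same write can be made by hand with ovn-sbctl against the record UUID from the log; a sketch only (the agent uses the IDL transaction shown above, not the CLI), and note the key needs its own double quotes because it contains a colon:

    import subprocess

    # Hand-rolled equivalent of the DbSetCommand above. The record UUID is
    # the one printed in the log; the inner double quotes are consumed by
    # ovn-sbctl itself, since the external_ids key contains a ':'.
    record = "c2a76040-4536-46ac-93c9-20aa76f22ff4"
    subprocess.run(
        ["ovn-sbctl", "set", "Chassis_Private", record,
         'external_ids:"neutron:ovn-metadata-sb-cfg"="16"'],
        check=True,
    )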
Jan 22 00:01:10 compute-0 podman[266788]: 2026-01-22 00:01:10.257283731 +0000 UTC m=+0.088905293 container exec 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 22 00:01:10 compute-0 podman[266788]: 2026-01-22 00:01:10.281298054 +0000 UTC m=+0.112919556 container exec_died 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, com.redhat.component=keepalived-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., version=2.2.4, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, name=keepalived, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 22 00:01:10 compute-0 sudo[266477]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:01:10 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:01:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:01:10 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:01:10 compute-0 sudo[266822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:01:10 compute-0 sudo[266822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:10 compute-0 sudo[266822]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:10 compute-0 ceph-mon[74318]: pgmap v1282: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:01:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:01:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:01:10 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:01:10 compute-0 sudo[266847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:01:10 compute-0 sudo[266847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:10 compute-0 sudo[266847]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:10 compute-0 sudo[266872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:01:10 compute-0 sudo[266872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:10 compute-0 sudo[266872]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:10 compute-0 sudo[266897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 00:01:10 compute-0 sudo[266897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:10.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:11 compute-0 sudo[266897]: pam_unix(sudo:session): session closed for user root
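This sudo cadence, /bin/true to prove passwordless sudo works, which python3 to locate an interpreter, then the bundled cephadm script with a verb (ls, gather-facts, ceph-volume), is the cephadm mgr module's normal per-host refresh over SSH as ceph-admin. The audit lines make the whole command stream recoverable; a small extractor (the input file name is a placeholder):

    import re

    # Matches the sudo audit lines above, e.g.
    #   sudo[266897]: ceph-admin : PWD=... ; USER=root ; COMMAND=/bin/python3 ...
    SUDO = re.compile(r"sudo\[\d+\]: (\S+) : .*COMMAND=(.+)$")

    def sudo_commands(path):
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = SUDO.search(line)
                if m:
                    yield m.group(1), m.group(2)

    for user, cmd in sudo_commands("compute-0.log"):  # placeholder path
        if "cephadm." in cmd:   # keep only the cephadm script invocations
            print(user, cmd)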
Jan 22 00:01:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:01:11 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:01:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 00:01:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:01:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 00:01:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:01:11 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev b8c18cf7-4f06-4adc-a90f-0a145a24dd10 does not exist
Jan 22 00:01:11 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 8aeb8bb2-90ca-4579-9003-c38a4d052008 does not exist
Jan 22 00:01:11 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 1f1104a9-e851-42dd-a5cd-1d03ce0fbdf8 does not exist
Jan 22 00:01:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 00:01:11 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:01:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 00:01:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:01:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:01:11 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
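Everything the mgr is doing here is ordinary mon_command traffic: fetch a minimal conf, read the client.admin and client.bootstrap-osd keys, and look for destroyed OSDs eligible for replacement. The same payloads can be sent from any admin client; a sketch with the rados Python binding, assuming a local ceph.conf and admin keyring are available:

    import json
    import rados

    # mon_command takes the same JSON payloads seen in the audit lines above
    # and returns (retcode, output bytes, status string).
    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        cmd = json.dumps({"prefix": "osd tree",
                          "states": ["destroyed"], "format": "json"})
        ret, out, err = cluster.mon_command(cmd, b"")
        tree = json.loads(out or b"{}")
        print(ret, [n.get("name") for n in tree.get("nodes", [])])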
Jan 22 00:01:11 compute-0 sudo[266954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:01:11 compute-0 sudo[266954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:11 compute-0 sudo[266954]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:11 compute-0 sudo[266979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:01:11 compute-0 sudo[266979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:11 compute-0 sudo[266979]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:11 compute-0 sudo[267005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:01:11 compute-0 sudo[267005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:11 compute-0 sudo[267005]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:01:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:01:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:01:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:01:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:01:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:01:11 compute-0 sudo[267030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 00:01:11 compute-0 sudo[267030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:11.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:12 compute-0 podman[267097]: 2026-01-22 00:01:12.008016937 +0000 UTC m=+0.054030354 container create 8d437fdfa0afa5e232e2c27c382d2579566018a2c048417bdd2c7c3fae2eafff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 00:01:12 compute-0 systemd[1]: Started libpod-conmon-8d437fdfa0afa5e232e2c27c382d2579566018a2c048417bdd2c7c3fae2eafff.scope.
Jan 22 00:01:12 compute-0 podman[267097]: 2026-01-22 00:01:11.981687021 +0000 UTC m=+0.027700518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:01:12 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:01:12 compute-0 sudo[267111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:01:12 compute-0 sudo[267111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:12 compute-0 sudo[267111]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:12 compute-0 podman[267097]: 2026-01-22 00:01:12.129970321 +0000 UTC m=+0.175983798 container init 8d437fdfa0afa5e232e2c27c382d2579566018a2c048417bdd2c7c3fae2eafff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_liskov, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:01:12 compute-0 podman[267097]: 2026-01-22 00:01:12.146872145 +0000 UTC m=+0.192885592 container start 8d437fdfa0afa5e232e2c27c382d2579566018a2c048417bdd2c7c3fae2eafff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:01:12 compute-0 podman[267097]: 2026-01-22 00:01:12.151281931 +0000 UTC m=+0.197295438 container attach 8d437fdfa0afa5e232e2c27c382d2579566018a2c048417bdd2c7c3fae2eafff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 00:01:12 compute-0 thirsty_liskov[267125]: 167 167
Jan 22 00:01:12 compute-0 systemd[1]: libpod-8d437fdfa0afa5e232e2c27c382d2579566018a2c048417bdd2c7c3fae2eafff.scope: Deactivated successfully.
Jan 22 00:01:12 compute-0 podman[267097]: 2026-01-22 00:01:12.153543231 +0000 UTC m=+0.199556648 container died 8d437fdfa0afa5e232e2c27c382d2579566018a2c048417bdd2c7c3fae2eafff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 00:01:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf737e25f15c34b3522c56f00449f242fa75ada514abfb610aae75d8eea388e5-merged.mount: Deactivated successfully.
Jan 22 00:01:12 compute-0 podman[267097]: 2026-01-22 00:01:12.205332784 +0000 UTC m=+0.251346201 container remove 8d437fdfa0afa5e232e2c27c382d2579566018a2c048417bdd2c7c3fae2eafff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:01:12 compute-0 sudo[267153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:01:12 compute-0 sudo[267153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:12 compute-0 sudo[267153]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:12 compute-0 systemd[1]: libpod-conmon-8d437fdfa0afa5e232e2c27c382d2579566018a2c048417bdd2c7c3fae2eafff.scope: Deactivated successfully.
Jan 22 00:01:12 compute-0 podman[267126]: 2026-01-22 00:01:12.27979664 +0000 UTC m=+0.190524979 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 22 00:01:12 compute-0 podman[267211]: 2026-01-22 00:01:12.446264502 +0000 UTC m=+0.073801015 container create caa66e31bac1b46e990a3712d373660fa5f1d6d856e6d83345ced0fac75de473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 00:01:12 compute-0 systemd[1]: Started libpod-conmon-caa66e31bac1b46e990a3712d373660fa5f1d6d856e6d83345ced0fac75de473.scope.
Jan 22 00:01:12 compute-0 podman[267211]: 2026-01-22 00:01:12.417160931 +0000 UTC m=+0.044697464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:01:12 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:01:12 compute-0 ceph-mon[74318]: pgmap v1283: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b5d13dea1c4e60b8c952289292335ec0314494d51a37b3a54aae2f18051cd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b5d13dea1c4e60b8c952289292335ec0314494d51a37b3a54aae2f18051cd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b5d13dea1c4e60b8c952289292335ec0314494d51a37b3a54aae2f18051cd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b5d13dea1c4e60b8c952289292335ec0314494d51a37b3a54aae2f18051cd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b5d13dea1c4e60b8c952289292335ec0314494d51a37b3a54aae2f18051cd4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 00:01:12 compute-0 podman[267211]: 2026-01-22 00:01:12.560032694 +0000 UTC m=+0.187569207 container init caa66e31bac1b46e990a3712d373660fa5f1d6d856e6d83345ced0fac75de473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_neumann, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:01:12 compute-0 podman[267211]: 2026-01-22 00:01:12.575296467 +0000 UTC m=+0.202832970 container start caa66e31bac1b46e990a3712d373660fa5f1d6d856e6d83345ced0fac75de473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_neumann, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 00:01:12 compute-0 podman[267211]: 2026-01-22 00:01:12.579898649 +0000 UTC m=+0.207435222 container attach caa66e31bac1b46e990a3712d373660fa5f1d6d856e6d83345ced0fac75de473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_neumann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 00:01:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:01:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:12.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:13 compute-0 zen_neumann[267228]: --> passed data devices: 0 physical, 1 LVM
Jan 22 00:01:13 compute-0 zen_neumann[267228]: --> relative data size: 1.0
Jan 22 00:01:13 compute-0 zen_neumann[267228]: --> All data devices are unavailable
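The run that matters is this one: cephadm told ceph-volume to build an OSD from /dev/ceph_vg0/ceph_lv0 (lvm batch --no-auto ... --yes --no-systemd), and ceph-volume declined with "All data devices are unavailable". The usual reason is that the LV already backs a deployed OSD, so batch filters it out as taken rather than failing; the lvm list call at 00:01:14 below is cephadm confirming what is actually deployed. The same check by hand, assuming ceph-volume is reachable in the execution environment (here it runs inside the ceph container):

    import json
    import subprocess

    # Mirror cephadm's follow-up: ask ceph-volume what is already deployed.
    # If /dev/ceph_vg0/ceph_lv0 shows up here with ceph.* tags, "lvm batch"
    # reports it as unavailable, as in the zen_neumann output above.
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            tags = dev.get("tags", {})
            print(osd_id, dev.get("lv_path"), tags.get("ceph.osd_fsid"))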
Jan 22 00:01:13 compute-0 systemd[1]: libpod-caa66e31bac1b46e990a3712d373660fa5f1d6d856e6d83345ced0fac75de473.scope: Deactivated successfully.
Jan 22 00:01:13 compute-0 podman[267211]: 2026-01-22 00:01:13.426729944 +0000 UTC m=+1.054266447 container died caa66e31bac1b46e990a3712d373660fa5f1d6d856e6d83345ced0fac75de473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 00:01:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-51b5d13dea1c4e60b8c952289292335ec0314494d51a37b3a54aae2f18051cd4-merged.mount: Deactivated successfully.
Jan 22 00:01:13 compute-0 podman[267211]: 2026-01-22 00:01:13.630993817 +0000 UTC m=+1.258530290 container remove caa66e31bac1b46e990a3712d373660fa5f1d6d856e6d83345ced0fac75de473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:01:13 compute-0 systemd[1]: libpod-conmon-caa66e31bac1b46e990a3712d373660fa5f1d6d856e6d83345ced0fac75de473.scope: Deactivated successfully.
Jan 22 00:01:13 compute-0 sudo[267030]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:13 compute-0 sudo[267257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:01:13 compute-0 sudo[267257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:13 compute-0 sudo[267257]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:13 compute-0 sudo[267282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:01:13 compute-0 sudo[267282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:13 compute-0 sudo[267282]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:13.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:13 compute-0 sudo[267307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:01:13 compute-0 sudo[267307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:13 compute-0 sudo[267307]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:14 compute-0 sudo[267332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 22 00:01:14 compute-0 sudo[267332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:14 compute-0 nova_compute[247516]: 2026-01-22 00:01:14.017 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:01:14 compute-0 nova_compute[247516]: 2026-01-22 00:01:14.020 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:01:14 compute-0 podman[267399]: 2026-01-22 00:01:14.410391683 +0000 UTC m=+0.051687761 container create c126d365fc9ab9ed0c5515760a347c119fe1dd74e2a4e04c0280d263f3736581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cori, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:01:14 compute-0 systemd[1]: Started libpod-conmon-c126d365fc9ab9ed0c5515760a347c119fe1dd74e2a4e04c0280d263f3736581.scope.
Jan 22 00:01:14 compute-0 podman[267399]: 2026-01-22 00:01:14.389645621 +0000 UTC m=+0.030941709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:01:14 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:01:14 compute-0 podman[267399]: 2026-01-22 00:01:14.519099619 +0000 UTC m=+0.160395757 container init c126d365fc9ab9ed0c5515760a347c119fe1dd74e2a4e04c0280d263f3736581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cori, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:01:14 compute-0 podman[267399]: 2026-01-22 00:01:14.529460779 +0000 UTC m=+0.170756827 container start c126d365fc9ab9ed0c5515760a347c119fe1dd74e2a4e04c0280d263f3736581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cori, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:01:14 compute-0 podman[267399]: 2026-01-22 00:01:14.532860475 +0000 UTC m=+0.174156553 container attach c126d365fc9ab9ed0c5515760a347c119fe1dd74e2a4e04c0280d263f3736581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:01:14 compute-0 intelligent_cori[267415]: 167 167
Jan 22 00:01:14 compute-0 systemd[1]: libpod-c126d365fc9ab9ed0c5515760a347c119fe1dd74e2a4e04c0280d263f3736581.scope: Deactivated successfully.
Jan 22 00:01:14 compute-0 conmon[267415]: conmon c126d365fc9ab9ed0c55 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c126d365fc9ab9ed0c5515760a347c119fe1dd74e2a4e04c0280d263f3736581.scope/container/memory.events
Jan 22 00:01:14 compute-0 podman[267399]: 2026-01-22 00:01:14.538227741 +0000 UTC m=+0.179523809 container died c126d365fc9ab9ed0c5515760a347c119fe1dd74e2a4e04c0280d263f3736581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cori, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 00:01:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d0a116fefac63392dbb5ea39f674cb68006c4ffd394c678ae58f54793db9c26-merged.mount: Deactivated successfully.
Jan 22 00:01:14 compute-0 podman[267399]: 2026-01-22 00:01:14.589769106 +0000 UTC m=+0.231065184 container remove c126d365fc9ab9ed0c5515760a347c119fe1dd74e2a4e04c0280d263f3736581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cori, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:01:14 compute-0 systemd[1]: libpod-conmon-c126d365fc9ab9ed0c5515760a347c119fe1dd74e2a4e04c0280d263f3736581.scope: Deactivated successfully.
Jan 22 00:01:14 compute-0 ceph-mon[74318]: pgmap v1284: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:14 compute-0 podman[267439]: 2026-01-22 00:01:14.857883506 +0000 UTC m=+0.063024782 container create d6975e94870c964abee5307c0ab258539ab3bfa4e8aac2a89fd99385442ace0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Jan 22 00:01:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:14.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:14 compute-0 systemd[1]: Started libpod-conmon-d6975e94870c964abee5307c0ab258539ab3bfa4e8aac2a89fd99385442ace0b.scope.
Jan 22 00:01:14 compute-0 podman[267439]: 2026-01-22 00:01:14.83539871 +0000 UTC m=+0.040540046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:01:14 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4f0a87d15cba0b79bfc73f144afeeb159f1721972338cbdbcceae0a1c48cd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4f0a87d15cba0b79bfc73f144afeeb159f1721972338cbdbcceae0a1c48cd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4f0a87d15cba0b79bfc73f144afeeb159f1721972338cbdbcceae0a1c48cd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4f0a87d15cba0b79bfc73f144afeeb159f1721972338cbdbcceae0a1c48cd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:01:14 compute-0 podman[267439]: 2026-01-22 00:01:14.965664092 +0000 UTC m=+0.170805378 container init d6975e94870c964abee5307c0ab258539ab3bfa4e8aac2a89fd99385442ace0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_engelbart, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 00:01:14 compute-0 podman[267439]: 2026-01-22 00:01:14.978120948 +0000 UTC m=+0.183262244 container start d6975e94870c964abee5307c0ab258539ab3bfa4e8aac2a89fd99385442ace0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_engelbart, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 00:01:14 compute-0 podman[267439]: 2026-01-22 00:01:14.983479564 +0000 UTC m=+0.188620860 container attach d6975e94870c964abee5307c0ab258539ab3bfa4e8aac2a89fd99385442ace0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_engelbart, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:01:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Jan 22 00:01:15 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Jan 22 00:01:15 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1732324223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]: {
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:     "1": [
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:         {
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:             "devices": [
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:                 "/dev/loop3"
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:             ],
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:             "lv_name": "ceph_lv0",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:             "lv_size": "7511998464",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:             "name": "ceph_lv0",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:             "tags": {
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:                 "ceph.cluster_name": "ceph",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:                 "ceph.crush_device_class": "",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:                 "ceph.encrypted": "0",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:                 "ceph.osd_id": "1",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:                 "ceph.type": "block",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:                 "ceph.vdo": "0"
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:             },
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:             "type": "block",
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:             "vg_name": "ceph_vg0"
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:         }
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]:     ]
Jan 22 00:01:15 compute-0 flamboyant_engelbart[267455]: }
Jan 22 00:01:15 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Jan 22 00:01:15 compute-0 systemd[1]: libpod-d6975e94870c964abee5307c0ab258539ab3bfa4e8aac2a89fd99385442ace0b.scope: Deactivated successfully.
Jan 22 00:01:15 compute-0 podman[267439]: 2026-01-22 00:01:15.703506573 +0000 UTC m=+0.908647859 container died d6975e94870c964abee5307c0ab258539ab3bfa4e8aac2a89fd99385442ace0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_engelbart, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:01:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d4f0a87d15cba0b79bfc73f144afeeb159f1721972338cbdbcceae0a1c48cd4-merged.mount: Deactivated successfully.
Jan 22 00:01:15 compute-0 podman[267439]: 2026-01-22 00:01:15.758226446 +0000 UTC m=+0.963367692 container remove d6975e94870c964abee5307c0ab258539ab3bfa4e8aac2a89fd99385442ace0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 00:01:15 compute-0 systemd[1]: libpod-conmon-d6975e94870c964abee5307c0ab258539ab3bfa4e8aac2a89fd99385442ace0b.scope: Deactivated successfully.
Jan 22 00:01:15 compute-0 sudo[267332]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:15 compute-0 sudo[267478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:01:15 compute-0 sudo[267478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:15 compute-0 sudo[267478]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:15.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:15 compute-0 sudo[267503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:01:15 compute-0 sudo[267503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:15 compute-0 sudo[267503]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:16 compute-0 sudo[267528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:01:16 compute-0 sudo[267528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:16 compute-0 sudo[267528]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:16 compute-0 sudo[267553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 22 00:01:16 compute-0 sudo[267553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:16 compute-0 podman[267620]: 2026-01-22 00:01:16.558366165 +0000 UTC m=+0.058146501 container create 8682dce6fffc7972a04fc0ca7bae8e0e8182dc64484ea233f2fd28dd24770c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:01:16 compute-0 systemd[1]: Started libpod-conmon-8682dce6fffc7972a04fc0ca7bae8e0e8182dc64484ea233f2fd28dd24770c3c.scope.
Jan 22 00:01:16 compute-0 podman[267620]: 2026-01-22 00:01:16.530660527 +0000 UTC m=+0.030440943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:01:16 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:01:16 compute-0 podman[267620]: 2026-01-22 00:01:16.649356872 +0000 UTC m=+0.149137238 container init 8682dce6fffc7972a04fc0ca7bae8e0e8182dc64484ea233f2fd28dd24770c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_johnson, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:01:16 compute-0 podman[267620]: 2026-01-22 00:01:16.65963744 +0000 UTC m=+0.159417776 container start 8682dce6fffc7972a04fc0ca7bae8e0e8182dc64484ea233f2fd28dd24770c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 00:01:16 compute-0 podman[267620]: 2026-01-22 00:01:16.66287562 +0000 UTC m=+0.162655956 container attach 8682dce6fffc7972a04fc0ca7bae8e0e8182dc64484ea233f2fd28dd24770c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_johnson, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:01:16 compute-0 pedantic_johnson[267636]: 167 167
Jan 22 00:01:16 compute-0 systemd[1]: libpod-8682dce6fffc7972a04fc0ca7bae8e0e8182dc64484ea233f2fd28dd24770c3c.scope: Deactivated successfully.
Jan 22 00:01:16 compute-0 podman[267620]: 2026-01-22 00:01:16.667489793 +0000 UTC m=+0.167270159 container died 8682dce6fffc7972a04fc0ca7bae8e0e8182dc64484ea233f2fd28dd24770c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_johnson, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 00:01:16 compute-0 ceph-mon[74318]: pgmap v1285: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:16 compute-0 ceph-mon[74318]: osdmap e166: 3 total, 3 up, 3 in
Jan 22 00:01:16 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1595100794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:01:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa1c72a370e9e9da5fd88d9115bc28180f10d2a4dbc5150696495b6707dbac9d-merged.mount: Deactivated successfully.
Jan 22 00:01:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 409 B/s wr, 4 op/s
Jan 22 00:01:16 compute-0 podman[267620]: 2026-01-22 00:01:16.71682005 +0000 UTC m=+0.216600406 container remove 8682dce6fffc7972a04fc0ca7bae8e0e8182dc64484ea233f2fd28dd24770c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_johnson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:01:16 compute-0 systemd[1]: libpod-conmon-8682dce6fffc7972a04fc0ca7bae8e0e8182dc64484ea233f2fd28dd24770c3c.scope: Deactivated successfully.
Jan 22 00:01:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:16.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:16 compute-0 podman[267660]: 2026-01-22 00:01:16.96587751 +0000 UTC m=+0.066887002 container create 15e433d5a0d3b534b2127f66b5c547d154389a2649922542da20cd210e67ac34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chaplygin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:01:16 compute-0 nova_compute[247516]: 2026-01-22 00:01:16.994 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:01:16 compute-0 nova_compute[247516]: 2026-01-22 00:01:16.996 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:01:16 compute-0 nova_compute[247516]: 2026-01-22 00:01:16.996 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:01:17 compute-0 systemd[1]: Started libpod-conmon-15e433d5a0d3b534b2127f66b5c547d154389a2649922542da20cd210e67ac34.scope.
Jan 22 00:01:17 compute-0 nova_compute[247516]: 2026-01-22 00:01:17.018 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 00:01:17 compute-0 podman[267660]: 2026-01-22 00:01:16.942867307 +0000 UTC m=+0.043876789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:01:17 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10ceaa7f5ddc1d8fcf992c93d65d8e8ca8e5cc2dce88e7c929ffc0d20851af0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10ceaa7f5ddc1d8fcf992c93d65d8e8ca8e5cc2dce88e7c929ffc0d20851af0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10ceaa7f5ddc1d8fcf992c93d65d8e8ca8e5cc2dce88e7c929ffc0d20851af0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10ceaa7f5ddc1d8fcf992c93d65d8e8ca8e5cc2dce88e7c929ffc0d20851af0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:01:17 compute-0 podman[267660]: 2026-01-22 00:01:17.077914148 +0000 UTC m=+0.178923690 container init 15e433d5a0d3b534b2127f66b5c547d154389a2649922542da20cd210e67ac34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Jan 22 00:01:17 compute-0 podman[267660]: 2026-01-22 00:01:17.08996404 +0000 UTC m=+0.190973492 container start 15e433d5a0d3b534b2127f66b5c547d154389a2649922542da20cd210e67ac34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 00:01:17 compute-0 podman[267660]: 2026-01-22 00:01:17.094094259 +0000 UTC m=+0.195103711 container attach 15e433d5a0d3b534b2127f66b5c547d154389a2649922542da20cd210e67ac34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:01:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Jan 22 00:01:17 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/81214439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:01:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Jan 22 00:01:17 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Jan 22 00:01:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:17.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:01:17 compute-0 magical_chaplygin[267676]: {
Jan 22 00:01:17 compute-0 magical_chaplygin[267676]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 22 00:01:17 compute-0 magical_chaplygin[267676]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:01:17 compute-0 magical_chaplygin[267676]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 00:01:17 compute-0 magical_chaplygin[267676]:         "osd_id": 1,
Jan 22 00:01:17 compute-0 magical_chaplygin[267676]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:01:17 compute-0 magical_chaplygin[267676]:         "type": "bluestore"
Jan 22 00:01:17 compute-0 magical_chaplygin[267676]:     }
Jan 22 00:01:17 compute-0 magical_chaplygin[267676]: }
Jan 22 00:01:17 compute-0 systemd[1]: libpod-15e433d5a0d3b534b2127f66b5c547d154389a2649922542da20cd210e67ac34.scope: Deactivated successfully.
Jan 22 00:01:17 compute-0 conmon[267676]: conmon 15e433d5a0d3b534b212 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15e433d5a0d3b534b2127f66b5c547d154389a2649922542da20cd210e67ac34.scope/container/memory.events
Jan 22 00:01:17 compute-0 podman[267660]: 2026-01-22 00:01:17.997728911 +0000 UTC m=+1.098738383 container died 15e433d5a0d3b534b2127f66b5c547d154389a2649922542da20cd210e67ac34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chaplygin, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:01:18 compute-0 nova_compute[247516]: 2026-01-22 00:01:18.012 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:01:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-10ceaa7f5ddc1d8fcf992c93d65d8e8ca8e5cc2dce88e7c929ffc0d20851af0b-merged.mount: Deactivated successfully.
Jan 22 00:01:18 compute-0 podman[267660]: 2026-01-22 00:01:18.053106075 +0000 UTC m=+1.154115527 container remove 15e433d5a0d3b534b2127f66b5c547d154389a2649922542da20cd210e67ac34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chaplygin, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:01:18 compute-0 systemd[1]: libpod-conmon-15e433d5a0d3b534b2127f66b5c547d154389a2649922542da20cd210e67ac34.scope: Deactivated successfully.
Jan 22 00:01:18 compute-0 sudo[267553]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:01:18 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:01:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:01:18 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:01:18 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 1d13f4ad-041b-4bfe-961c-6fa1ff4d3c6a does not exist
Jan 22 00:01:18 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev b8438f6f-1fce-494c-97f6-5cf6a7c51f52 does not exist
Jan 22 00:01:18 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 3dce8c49-e27d-4b48-9651-6d99e8f8bac5 does not exist
Jan 22 00:01:18 compute-0 sudo[267709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:01:18 compute-0 sudo[267709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:18 compute-0 sudo[267709]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:18 compute-0 sudo[267734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 00:01:18 compute-0 sudo[267734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:18 compute-0 sudo[267734]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:18 compute-0 ceph-mon[74318]: pgmap v1287: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 409 B/s wr, 4 op/s
Jan 22 00:01:18 compute-0 ceph-mon[74318]: osdmap e167: 3 total, 3 up, 3 in
Jan 22 00:01:18 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:01:18 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:01:18 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3150718798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:01:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 511 B/s wr, 6 op/s
Jan 22 00:01:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:18.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:18 compute-0 nova_compute[247516]: 2026-01-22 00:01:18.986 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:01:19 compute-0 ceph-mon[74318]: pgmap v1289: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 511 B/s wr, 6 op/s
Jan 22 00:01:19 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2368711840' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:01:19 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2368711840' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:01:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:19.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:20 compute-0 podman[267760]: 2026-01-22 00:01:20.017913096 +0000 UTC m=+0.113992310 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 00:01:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 2.1 KiB/s wr, 40 op/s
Jan 22 00:01:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:20.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:20 compute-0 nova_compute[247516]: 2026-01-22 00:01:20.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:01:21 compute-0 ceph-mon[74318]: pgmap v1290: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 2.1 KiB/s wr, 40 op/s
Jan 22 00:01:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:21.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:21 compute-0 nova_compute[247516]: 2026-01-22 00:01:21.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:01:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 2.1 KiB/s wr, 45 op/s
Jan 22 00:01:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:01:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Jan 22 00:01:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:22.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Jan 22 00:01:22 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Jan 22 00:01:22 compute-0 nova_compute[247516]: 2026-01-22 00:01:22.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:01:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:23.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:23 compute-0 ceph-mon[74318]: pgmap v1291: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 2.1 KiB/s wr, 45 op/s
Jan 22 00:01:23 compute-0 ceph-mon[74318]: osdmap e168: 3 total, 3 up, 3 in
Jan 22 00:01:23 compute-0 nova_compute[247516]: 2026-01-22 00:01:23.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:01:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 1.6 KiB/s wr, 39 op/s
Jan 22 00:01:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:24.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:24 compute-0 nova_compute[247516]: 2026-01-22 00:01:24.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:01:25 compute-0 nova_compute[247516]: 2026-01-22 00:01:25.023 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:01:25 compute-0 nova_compute[247516]: 2026-01-22 00:01:25.024 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:01:25 compute-0 nova_compute[247516]: 2026-01-22 00:01:25.024 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:01:25 compute-0 nova_compute[247516]: 2026-01-22 00:01:25.025 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:01:25 compute-0 nova_compute[247516]: 2026-01-22 00:01:25.025 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:01:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:01:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481274063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:01:25 compute-0 nova_compute[247516]: 2026-01-22 00:01:25.501 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:01:25 compute-0 nova_compute[247516]: 2026-01-22 00:01:25.725 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:01:25 compute-0 nova_compute[247516]: 2026-01-22 00:01:25.727 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5118MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:01:25 compute-0 nova_compute[247516]: 2026-01-22 00:01:25.727 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:01:25 compute-0 nova_compute[247516]: 2026-01-22 00:01:25.727 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:01:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:01:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:25.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:01:25 compute-0 ceph-mon[74318]: pgmap v1293: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 1.6 KiB/s wr, 39 op/s
Jan 22 00:01:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1481274063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:01:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3634341512' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:01:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3634341512' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:01:26 compute-0 nova_compute[247516]: 2026-01-22 00:01:26.012 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 22 00:01:26 compute-0 nova_compute[247516]: 2026-01-22 00:01:26.012 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:01:26 compute-0 nova_compute[247516]: 2026-01-22 00:01:26.013 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:01:26 compute-0 nova_compute[247516]: 2026-01-22 00:01:26.090 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Refreshing inventories for resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 22 00:01:26 compute-0 nova_compute[247516]: 2026-01-22 00:01:26.556 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Updating ProviderTree inventory for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 22 00:01:26 compute-0 nova_compute[247516]: 2026-01-22 00:01:26.557 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Updating inventory in ProviderTree for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 00:01:26 compute-0 nova_compute[247516]: 2026-01-22 00:01:26.587 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Refreshing aggregate associations for resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 22 00:01:26 compute-0 nova_compute[247516]: 2026-01-22 00:01:26.648 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Refreshing trait associations for resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 22 00:01:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 KiB/s wr, 50 op/s
Jan 22 00:01:26 compute-0 nova_compute[247516]: 2026-01-22 00:01:26.726 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:01:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:01:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:26.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:01:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:01:27 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2071308455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:01:27 compute-0 nova_compute[247516]: 2026-01-22 00:01:27.202 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:01:27 compute-0 nova_compute[247516]: 2026-01-22 00:01:27.210 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:01:27 compute-0 nova_compute[247516]: 2026-01-22 00:01:27.230 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 00:01:27 compute-0 nova_compute[247516]: 2026-01-22 00:01:27.232 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:01:27 compute-0 nova_compute[247516]: 2026-01-22 00:01:27.232 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:01:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:01:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:27.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:27 compute-0 ceph-mon[74318]: pgmap v1294: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 KiB/s wr, 50 op/s
Jan 22 00:01:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2071308455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:01:28 compute-0 nova_compute[247516]: 2026-01-22 00:01:28.233 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:01:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 KiB/s wr, 45 op/s
Jan 22 00:01:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:28.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:29.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:30 compute-0 ceph-mon[74318]: pgmap v1295: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 KiB/s wr, 45 op/s
Jan 22 00:01:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 76 MiB data, 284 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 44 op/s
Jan 22 00:01:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:30.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:31.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:32 compute-0 sudo[267829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:01:32 compute-0 sudo[267829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:32 compute-0 sudo[267829]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:32 compute-0 sudo[267854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:01:32 compute-0 sudo[267854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:32 compute-0 sudo[267854]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 22 00:01:32 compute-0 ceph-mon[74318]: pgmap v1296: 305 pgs: 305 active+clean; 76 MiB data, 284 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 44 op/s
Jan 22 00:01:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:01:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:32.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:33 compute-0 ceph-mon[74318]: pgmap v1297: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 22 00:01:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:33.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 48 op/s
Jan 22 00:01:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:34.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:35 compute-0 ceph-mon[74318]: pgmap v1298: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 48 op/s
Jan 22 00:01:35 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2408762471' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:01:35 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2408762471' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:01:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:35.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 48 op/s
Jan 22 00:01:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:36.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:37 compute-0 ceph-mon[74318]: pgmap v1299: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 48 op/s
Jan 22 00:01:37 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/4098712181' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:01:37 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/4098712181' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:01:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:01:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:37.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 00:01:38 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3323104198' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:01:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 00:01:38 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3323104198' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:01:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 22 00:01:38 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3323104198' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:01:38 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3323104198' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:01:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:38.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:01:39
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', '.rgw.root']
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:01:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:01:39 compute-0 ceph-mon[74318]: pgmap v1300: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 22 00:01:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:39.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Jan 22 00:01:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:40.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:41 compute-0 ceph-mon[74318]: pgmap v1301: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Jan 22 00:01:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:01:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:41.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:01:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 141 KiB/s wr, 54 op/s
Jan 22 00:01:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:01:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:42.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:42 compute-0 podman[267884]: 2026-01-22 00:01:42.996441993 +0000 UTC m=+0.102685381 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 00:01:43 compute-0 ceph-mon[74318]: pgmap v1302: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 141 KiB/s wr, 54 op/s
Jan 22 00:01:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:43.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 1.1 KiB/s wr, 41 op/s
Jan 22 00:01:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:44.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:45 compute-0 ceph-mon[74318]: pgmap v1303: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 1.1 KiB/s wr, 41 op/s
Jan 22 00:01:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:45.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 1.1 KiB/s wr, 41 op/s
Jan 22 00:01:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:46.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:01:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:47.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:47 compute-0 ceph-mon[74318]: pgmap v1304: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 1.1 KiB/s wr, 41 op/s
Jan 22 00:01:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 767 B/s wr, 39 op/s
Jan 22 00:01:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:01:48.760 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:01:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:01:48.761 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:01:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:01:48.761 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:01:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:48.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:49.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:49 compute-0 ceph-mon[74318]: pgmap v1305: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 767 B/s wr, 39 op/s
Jan 22 00:01:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 767 B/s wr, 39 op/s
Jan 22 00:01:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:50.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:50 compute-0 podman[267914]: 2026-01-22 00:01:50.959290815 +0000 UTC m=+0.067935753 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 00:01:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:51.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:51 compute-0 ceph-mon[74318]: pgmap v1306: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 767 B/s wr, 39 op/s
Jan 22 00:01:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 9.4 KiB/s rd, 0 B/s wr, 12 op/s
Jan 22 00:01:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:01:52 compute-0 sudo[267934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:01:52 compute-0 sudo[267934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:52.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:52 compute-0 sudo[267934]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:53 compute-0 sudo[267959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:01:53 compute-0 sudo[267959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:01:53 compute-0 sudo[267959]: pam_unix(sudo:session): session closed for user root
Jan 22 00:01:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:53.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:54 compute-0 ceph-mon[74318]: pgmap v1307: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 9.4 KiB/s rd, 0 B/s wr, 12 op/s
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009887399962776847 of space, bias 1.0, pg target 0.2966219988833054 quantized to 32 (current 32)
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 00:01:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:54.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:55.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:56 compute-0 ceph-mon[74318]: pgmap v1308: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:01:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 11 op/s
Jan 22 00:01:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:56.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:01:57 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:01:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:01:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:57.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:01:58 compute-0 ceph-mon[74318]: pgmap v1309: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 11 op/s
Jan 22 00:01:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 11 op/s
Jan 22 00:01:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:01:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:01:58.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:01:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:01:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:01:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:01:59.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:00 compute-0 ceph-mon[74318]: pgmap v1310: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 11 op/s
Jan 22 00:02:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 111 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 393 KiB/s wr, 43 op/s
Jan 22 00:02:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:00.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:01.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:02 compute-0 ceph-mon[74318]: pgmap v1311: 305 pgs: 305 active+clean; 111 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 393 KiB/s wr, 43 op/s
Jan 22 00:02:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 22 00:02:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:02:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:02:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:02.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:02:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:02:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:03.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:02:04 compute-0 ceph-mon[74318]: pgmap v1312: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 22 00:02:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 22 00:02:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:04.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:05.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:06 compute-0 ceph-mon[74318]: pgmap v1313: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 22 00:02:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 22 00:02:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:02:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:06.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:02:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:02:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:07.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:08 compute-0 ceph-mon[74318]: pgmap v1314: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 22 00:02:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 22 00:02:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:08.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:02:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:02:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:02:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:02:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:02:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:02:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:02:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:09.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:02:10 compute-0 ceph-mon[74318]: pgmap v1315: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 22 00:02:10 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1208052697' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:02:10 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1208052697' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:02:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 110 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 22 00:02:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:10.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:11 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:02:11.456 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 00:02:11 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:02:11.458 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 00:02:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Jan 22 00:02:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Jan 22 00:02:11 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Jan 22 00:02:11 compute-0 ceph-mon[74318]: pgmap v1316: 305 pgs: 305 active+clean; 110 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 22 00:02:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:11.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 108 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Jan 22 00:02:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:02:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:12.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:12 compute-0 ceph-mon[74318]: osdmap e169: 3 total, 3 up, 3 in
Jan 22 00:02:13 compute-0 sudo[267994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:02:13 compute-0 sudo[267994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:13 compute-0 sudo[267994]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:13 compute-0 sudo[268025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:02:13 compute-0 sudo[268025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:13 compute-0 sudo[268025]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:13 compute-0 podman[268018]: 2026-01-22 00:02:13.300449608 +0000 UTC m=+0.106446356 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 22 00:02:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:13.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:14 compute-0 ceph-mon[74318]: pgmap v1318: 305 pgs: 305 active+clean; 108 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Jan 22 00:02:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 108 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Jan 22 00:02:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:02:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:14.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
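
[Annotation] The recurring radosgw triples (starting new request / req done / beast access line) are anonymous "HEAD /" probes arriving roughly once per second, alternating between 192.168.122.100 and 192.168.122.102, which is the signature of load-balancer or monitoring health checks against the RGW beast frontend rather than user traffic. A sketch of the same probe; the port is an assumption, since the log does not record the beast listen address (8080 is a common default):

    import urllib.request

    # Anonymous HEAD probe like the ones in the beast access lines above.
    # The endpoint port is an assumption not taken from this log.
    req = urllib.request.Request("http://192.168.122.100:8080/", method="HEAD")
    with urllib.request.urlopen(req, timeout=5) as resp:
        # A healthy RGW answers with 200 and an empty body, matching the
        # 'http_status=200 ... "HEAD / HTTP/1.0" 200 0' fields logged above.
        print(resp.status)
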
Jan 22 00:02:14 compute-0 nova_compute[247516]: 2026-01-22 00:02:14.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:02:14 compute-0 nova_compute[247516]: 2026-01-22 00:02:14.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:02:15 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3351644843' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:02:15 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3351644843' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:02:15 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2289790013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:02:15 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:02:15.462 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 00:02:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:15.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:16 compute-0 ceph-mon[74318]: pgmap v1319: 305 pgs: 305 active+clean; 108 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Jan 22 00:02:16 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/493317993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:02:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 62 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Jan 22 00:02:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:16.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:16 compute-0 nova_compute[247516]: 2026-01-22 00:02:16.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:02:16 compute-0 nova_compute[247516]: 2026-01-22 00:02:16.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:02:16 compute-0 nova_compute[247516]: 2026-01-22 00:02:16.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:02:17 compute-0 nova_compute[247516]: 2026-01-22 00:02:17.019 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 00:02:17 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:02:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:17.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:18 compute-0 ceph-mon[74318]: pgmap v1320: 305 pgs: 305 active+clean; 62 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Jan 22 00:02:18 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1439565108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:02:18 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3635300735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:02:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 62 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Jan 22 00:02:18 compute-0 sudo[268074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:02:18 compute-0 sudo[268074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:18 compute-0 sudo[268074]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:18 compute-0 sudo[268099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:02:18 compute-0 sudo[268099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:18 compute-0 sudo[268099]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:18 compute-0 sudo[268124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:02:18 compute-0 sudo[268124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:18 compute-0 sudo[268124]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:18 compute-0 sudo[268149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 00:02:18 compute-0 sudo[268149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:18.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:19 compute-0 sudo[268149]: pam_unix(sudo:session): session closed for user root
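
[Annotation] The sudo bursts from ceph-admin follow cephadm's SSH orchestration pattern: a '/bin/true' probe to verify connectivity and sudo rights, a 'which python3' to locate an interpreter, then the copied cephadm binary itself (here 'gather-facts' under a 895 s timeout). A minimal sketch of the first two probe steps, assuming a host where the ceph-admin sudo rules shown above apply:

    import subprocess

    # Connectivity/sudo probe, mirroring COMMAND=/bin/true above.
    subprocess.run(["sudo", "/bin/true"], check=True)

    # Interpreter discovery, mirroring COMMAND=/bin/which python3 above.
    python3 = subprocess.run(
        ["sudo", "/bin/which", "python3"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print("interpreter:", python3)
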
Jan 22 00:02:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:02:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:02:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 00:02:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:02:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 00:02:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:02:19 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev ca1a0a68-3324-48ec-a6e3-5fcc5c98b5a3 does not exist
Jan 22 00:02:19 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 28ed3717-c54a-4f48-821e-62f5ad9b526f does not exist
Jan 22 00:02:19 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 60ddd323-2bda-4e15-8120-e6e46c9d7d62 does not exist
Jan 22 00:02:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 00:02:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:02:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 00:02:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:02:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:02:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:02:19 compute-0 sudo[268206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:02:19 compute-0 sudo[268206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:19 compute-0 ceph-mon[74318]: pgmap v1321: 305 pgs: 305 active+clean; 62 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Jan 22 00:02:19 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:02:19 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:02:19 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:02:19 compute-0 sudo[268206]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:19 compute-0 sudo[268231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:02:19 compute-0 sudo[268231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:19 compute-0 sudo[268231]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:02:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:19.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:02:20 compute-0 nova_compute[247516]: 2026-01-22 00:02:20.014 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:02:20 compute-0 sudo[268256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:02:20 compute-0 sudo[268256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:20 compute-0 sudo[268256]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:20 compute-0 sudo[268281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 00:02:20 compute-0 sudo[268281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:20 compute-0 podman[268347]: 2026-01-22 00:02:20.520179908 +0000 UTC m=+0.040776833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:02:20 compute-0 podman[268347]: 2026-01-22 00:02:20.661755771 +0000 UTC m=+0.182352616 container create 626f5d71401b1ff1c6d084a0a798eb7810be5b54632e5e368b526ae62d8328c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:02:20 compute-0 systemd[1]: Started libpod-conmon-626f5d71401b1ff1c6d084a0a798eb7810be5b54632e5e368b526ae62d8328c8.scope.
Jan 22 00:02:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 62 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Jan 22 00:02:20 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:02:20 compute-0 podman[268347]: 2026-01-22 00:02:20.880208473 +0000 UTC m=+0.400805388 container init 626f5d71401b1ff1c6d084a0a798eb7810be5b54632e5e368b526ae62d8328c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shirley, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:02:20 compute-0 podman[268347]: 2026-01-22 00:02:20.890689757 +0000 UTC m=+0.411286572 container start 626f5d71401b1ff1c6d084a0a798eb7810be5b54632e5e368b526ae62d8328c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shirley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:02:20 compute-0 vigorous_shirley[268363]: 167 167
Jan 22 00:02:20 compute-0 systemd[1]: libpod-626f5d71401b1ff1c6d084a0a798eb7810be5b54632e5e368b526ae62d8328c8.scope: Deactivated successfully.
Jan 22 00:02:20 compute-0 conmon[268363]: conmon 626f5d71401b1ff1c6d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-626f5d71401b1ff1c6d084a0a798eb7810be5b54632e5e368b526ae62d8328c8.scope/container/memory.events
Jan 22 00:02:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:20.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:02:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:02:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:02:21 compute-0 podman[268347]: 2026-01-22 00:02:21.084897898 +0000 UTC m=+0.605494713 container attach 626f5d71401b1ff1c6d084a0a798eb7810be5b54632e5e368b526ae62d8328c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:02:21 compute-0 podman[268347]: 2026-01-22 00:02:21.08687408 +0000 UTC m=+0.607470935 container died 626f5d71401b1ff1c6d084a0a798eb7810be5b54632e5e368b526ae62d8328c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shirley, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:02:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-030c3b40d2fcc12903a33f42c311a396890bcc5903034e06c60a116106faaa30-merged.mount: Deactivated successfully.
Jan 22 00:02:21 compute-0 sshd-session[268395]: error: kex_exchange_identification: read: Connection reset by peer
Jan 22 00:02:21 compute-0 sshd-session[268395]: Connection reset by 176.120.22.52 port 33420
Jan 22 00:02:21 compute-0 podman[268347]: 2026-01-22 00:02:21.913505558 +0000 UTC m=+1.434102423 container remove 626f5d71401b1ff1c6d084a0a798eb7810be5b54632e5e368b526ae62d8328c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:02:21 compute-0 podman[268383]: 2026-01-22 00:02:21.971292417 +0000 UTC m=+0.528258913 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 00:02:21 compute-0 nova_compute[247516]: 2026-01-22 00:02:21.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:02:21 compute-0 systemd[1]: libpod-conmon-626f5d71401b1ff1c6d084a0a798eb7810be5b54632e5e368b526ae62d8328c8.scope: Deactivated successfully.
Jan 22 00:02:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:22.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:22 compute-0 podman[268411]: 2026-01-22 00:02:22.162285299 +0000 UTC m=+0.046059057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:02:22 compute-0 podman[268411]: 2026-01-22 00:02:22.27250493 +0000 UTC m=+0.156278658 container create 861a7f7e86d1fb766fa919668e58714062d656e1ccd5c329eef794c7e4c29c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_satoshi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 00:02:22 compute-0 systemd[1]: Started libpod-conmon-861a7f7e86d1fb766fa919668e58714062d656e1ccd5c329eef794c7e4c29c21.scope.
Jan 22 00:02:22 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8433e700da90e9940995de270ce3e9baf6a3ed43a534619c8e268ad30a9883f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8433e700da90e9940995de270ce3e9baf6a3ed43a534619c8e268ad30a9883f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8433e700da90e9940995de270ce3e9baf6a3ed43a534619c8e268ad30a9883f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8433e700da90e9940995de270ce3e9baf6a3ed43a534619c8e268ad30a9883f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8433e700da90e9940995de270ce3e9baf6a3ed43a534619c8e268ad30a9883f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 00:02:22 compute-0 podman[268411]: 2026-01-22 00:02:22.673178274 +0000 UTC m=+0.556952022 container init 861a7f7e86d1fb766fa919668e58714062d656e1ccd5c329eef794c7e4c29c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:02:22 compute-0 podman[268411]: 2026-01-22 00:02:22.686357211 +0000 UTC m=+0.570130909 container start 861a7f7e86d1fb766fa919668e58714062d656e1ccd5c329eef794c7e4c29c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_satoshi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:02:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 62 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.3 KiB/s wr, 42 op/s
Jan 22 00:02:22 compute-0 ceph-mon[74318]: pgmap v1322: 305 pgs: 305 active+clean; 62 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Jan 22 00:02:22 compute-0 podman[268411]: 2026-01-22 00:02:22.835522639 +0000 UTC m=+0.719296417 container attach 861a7f7e86d1fb766fa919668e58714062d656e1ccd5c329eef794c7e4c29c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_satoshi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:02:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:02:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:22.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:22 compute-0 nova_compute[247516]: 2026-01-22 00:02:22.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:02:23 compute-0 zen_satoshi[268428]: --> passed data devices: 0 physical, 1 LVM
Jan 22 00:02:23 compute-0 zen_satoshi[268428]: --> relative data size: 1.0
Jan 22 00:02:23 compute-0 zen_satoshi[268428]: --> All data devices are unavailable
Jan 22 00:02:23 compute-0 systemd[1]: libpod-861a7f7e86d1fb766fa919668e58714062d656e1ccd5c329eef794c7e4c29c21.scope: Deactivated successfully.
Jan 22 00:02:23 compute-0 podman[268411]: 2026-01-22 00:02:23.649460915 +0000 UTC m=+1.533234613 container died 861a7f7e86d1fb766fa919668e58714062d656e1ccd5c329eef794c7e4c29c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 00:02:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-8433e700da90e9940995de270ce3e9baf6a3ed43a534619c8e268ad30a9883f9-merged.mount: Deactivated successfully.
Jan 22 00:02:23 compute-0 podman[268411]: 2026-01-22 00:02:23.975511218 +0000 UTC m=+1.859284916 container remove 861a7f7e86d1fb766fa919668e58714062d656e1ccd5c329eef794c7e4c29c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 00:02:23 compute-0 systemd[1]: libpod-conmon-861a7f7e86d1fb766fa919668e58714062d656e1ccd5c329eef794c7e4c29c21.scope: Deactivated successfully.
Jan 22 00:02:23 compute-0 nova_compute[247516]: 2026-01-22 00:02:23.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:02:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:24.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:24 compute-0 sudo[268281]: pam_unix(sudo:session): session closed for user root
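
[Annotation] The zen_satoshi container run and the session close above record a cephadm 'ceph-volume lvm batch' attempt against /dev/ceph_vg0/ceph_lv0 that ended with '--> All data devices are unavailable': ceph-volume found nothing it could consume, which is what it reports when the LV already carries an OSD (the 'lvm list' JSON later in this log shows ceph.osd_id=1 tags on that LV). A sketch of checking this directly via the LVM tags ceph-volume writes, assuming it runs on this host:

    import subprocess

    # ceph-volume records OSD metadata as LVM tags; a populated
    # ceph.osd_id tag means 'lvm batch' will skip the device.
    out = subprocess.run(
        ["lvs", "--noheadings", "-o", "lv_tags", "/dev/ceph_vg0/ceph_lv0"],
        check=True, capture_output=True, text=True,
    ).stdout
    print("already an OSD" if "ceph.osd_id=" in out else "device looks free")
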
Jan 22 00:02:24 compute-0 sudo[268458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:02:24 compute-0 sudo[268458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:24 compute-0 sudo[268458]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:24 compute-0 ceph-mon[74318]: pgmap v1323: 305 pgs: 305 active+clean; 62 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.3 KiB/s wr, 42 op/s
Jan 22 00:02:24 compute-0 sudo[268483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:02:24 compute-0 sudo[268483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:24 compute-0 sudo[268483]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:24 compute-0 sudo[268508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:02:24 compute-0 sudo[268508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:24 compute-0 sudo[268508]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:24 compute-0 sudo[268533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 22 00:02:24 compute-0 sudo[268533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 62 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.9 KiB/s wr, 39 op/s
Jan 22 00:02:24 compute-0 podman[268597]: 2026-01-22 00:02:24.848138821 +0000 UTC m=+0.112111502 container create f18b63ab48702487f7f089e6c69ae15a279682d61c2992bbbcad057e3e4eeee4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_keldysh, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 00:02:24 compute-0 podman[268597]: 2026-01-22 00:02:24.778294429 +0000 UTC m=+0.042267170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:02:24 compute-0 systemd[1]: Started libpod-conmon-f18b63ab48702487f7f089e6c69ae15a279682d61c2992bbbcad057e3e4eeee4.scope.
Jan 22 00:02:24 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:02:24 compute-0 podman[268597]: 2026-01-22 00:02:24.975263896 +0000 UTC m=+0.239236637 container init f18b63ab48702487f7f089e6c69ae15a279682d61c2992bbbcad057e3e4eeee4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 22 00:02:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:24.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:24 compute-0 podman[268597]: 2026-01-22 00:02:24.986948588 +0000 UTC m=+0.250921279 container start f18b63ab48702487f7f089e6c69ae15a279682d61c2992bbbcad057e3e4eeee4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_keldysh, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:02:24 compute-0 ecstatic_keldysh[268613]: 167 167
Jan 22 00:02:24 compute-0 systemd[1]: libpod-f18b63ab48702487f7f089e6c69ae15a279682d61c2992bbbcad057e3e4eeee4.scope: Deactivated successfully.
Jan 22 00:02:25 compute-0 podman[268597]: 2026-01-22 00:02:25.064833639 +0000 UTC m=+0.328806330 container attach f18b63ab48702487f7f089e6c69ae15a279682d61c2992bbbcad057e3e4eeee4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 00:02:25 compute-0 podman[268597]: 2026-01-22 00:02:25.065297383 +0000 UTC m=+0.329270064 container died f18b63ab48702487f7f089e6c69ae15a279682d61c2992bbbcad057e3e4eeee4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_keldysh, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:02:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-873a25f174a86dbcb1aa2764f544209e84dd2a45989f78528d45ee8233f2c87d-merged.mount: Deactivated successfully.
Jan 22 00:02:25 compute-0 podman[268597]: 2026-01-22 00:02:25.871717716 +0000 UTC m=+1.135690377 container remove f18b63ab48702487f7f089e6c69ae15a279682d61c2992bbbcad057e3e4eeee4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_keldysh, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 00:02:25 compute-0 systemd[1]: libpod-conmon-f18b63ab48702487f7f089e6c69ae15a279682d61c2992bbbcad057e3e4eeee4.scope: Deactivated successfully.
Jan 22 00:02:25 compute-0 nova_compute[247516]: 2026-01-22 00:02:25.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:02:25 compute-0 nova_compute[247516]: 2026-01-22 00:02:25.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:02:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:26.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:26 compute-0 nova_compute[247516]: 2026-01-22 00:02:26.025 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:02:26 compute-0 nova_compute[247516]: 2026-01-22 00:02:26.026 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:02:26 compute-0 nova_compute[247516]: 2026-01-22 00:02:26.026 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
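
[Annotation] The acquiring/acquired/released triple above is oslo.concurrency's lockutils at work (the 'inner' frames point at lockutils.py): nova serializes resource-tracker methods on a "compute_resources" semaphore, and the 'held 0.000s' shows clean_compute_node_cache returned almost immediately. A minimal sketch of the decorator that produces exactly these debug lines; the function body here is hypothetical:

    from oslo_concurrency import lockutils

    # Entering the decorated function logs 'Acquiring lock ... acquired',
    # and returning logs 'released', as in the three lines above.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # held for ~0.000s in the log, i.e. an effectively empty pass
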
Jan 22 00:02:26 compute-0 nova_compute[247516]: 2026-01-22 00:02:26.027 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:02:26 compute-0 nova_compute[247516]: 2026-01-22 00:02:26.027 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:02:26 compute-0 podman[268640]: 2026-01-22 00:02:26.112993515 +0000 UTC m=+0.064769896 container create d919063a4074de85e6e94e0fe269972c8b2ff9a550293a769b8f519cb46395be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:02:26 compute-0 podman[268640]: 2026-01-22 00:02:26.077230108 +0000 UTC m=+0.029006499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:02:26 compute-0 systemd[1]: Started libpod-conmon-d919063a4074de85e6e94e0fe269972c8b2ff9a550293a769b8f519cb46395be.scope.
Jan 22 00:02:26 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc6ee45add4d663da251ba35e777528363213244cc1a424a16599ead82ce5b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc6ee45add4d663da251ba35e777528363213244cc1a424a16599ead82ce5b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc6ee45add4d663da251ba35e777528363213244cc1a424a16599ead82ce5b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc6ee45add4d663da251ba35e777528363213244cc1a424a16599ead82ce5b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:02:26 compute-0 podman[268640]: 2026-01-22 00:02:26.268252741 +0000 UTC m=+0.220029102 container init d919063a4074de85e6e94e0fe269972c8b2ff9a550293a769b8f519cb46395be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ellis, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 00:02:26 compute-0 podman[268640]: 2026-01-22 00:02:26.280651475 +0000 UTC m=+0.232427826 container start d919063a4074de85e6e94e0fe269972c8b2ff9a550293a769b8f519cb46395be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ellis, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:02:26 compute-0 podman[268640]: 2026-01-22 00:02:26.328378683 +0000 UTC m=+0.280155064 container attach d919063a4074de85e6e94e0fe269972c8b2ff9a550293a769b8f519cb46395be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ellis, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 00:02:26 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:02:26 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2070825576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:02:26 compute-0 nova_compute[247516]: 2026-01-22 00:02:26.495 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
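
[Annotation] The resource tracker audits Ceph-backed disk capacity by shelling out to the exact command logged above (0.467 s round trip, exit 0). A sketch that runs the same command and reads the top-level totals; the 'stats' field names assume the usual 'ceph df --format=json' schema, and the client.openstack keyring referenced by --id is taken from the log:

    import json
    import subprocess

    # Same invocation as the 'Running cmd (subprocess)' line above.
    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    # Cluster-wide totals are reported in bytes under the 'stats' key.
    stats = json.loads(raw)["stats"]
    print("free GiB: %.1f" % (stats["total_avail_bytes"] / 2**30))
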
Jan 22 00:02:26 compute-0 nova_compute[247516]: 2026-01-22 00:02:26.651 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:02:26 compute-0 nova_compute[247516]: 2026-01-22 00:02:26.652 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5088MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:02:26 compute-0 nova_compute[247516]: 2026-01-22 00:02:26.653 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:02:26 compute-0 nova_compute[247516]: 2026-01-22 00:02:26.653 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:02:26 compute-0 ceph-mon[74318]: pgmap v1324: 305 pgs: 305 active+clean; 62 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.9 KiB/s wr, 39 op/s
Jan 22 00:02:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/537866391' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:02:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/537866391' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:02:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2070825576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:02:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 62 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 4.2 KiB/s wr, 54 op/s
Jan 22 00:02:26 compute-0 nova_compute[247516]: 2026-01-22 00:02:26.819 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
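[editor's note] The INFO line above means the resource tracker found Placement allocations for an instance UUID that no longer exists in Nova's database, typically debris from an interrupted delete or evacuation. A minimal sketch of how an operator could confirm and clear such an orphan, assuming the osc-placement CLI plugin and admin credentials are available (neither appears in the log):

    # Sketch: confirm and remove an orphaned Placement allocation.
    # Assumes python-openstackclient with the osc-placement plugin and
    # admin credentials loaded in the environment; the UUID is taken
    # from the log line above.
    import subprocess

    CONSUMER = "b246822e-62e5-45d0-84c6-8abd60cdbeb0"

    # Show what the consumer still holds against this compute node.
    subprocess.run(
        ["openstack", "resource", "provider", "allocation", "show", CONSUMER],
        check=True,
    )
    # If the instance really is gone, drop the stale allocation.
    subprocess.run(
        ["openstack", "resource", "provider", "allocation", "delete", CONSUMER],
        check=True,
    )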
Jan 22 00:02:26 compute-0 nova_compute[247516]: 2026-01-22 00:02:26.820 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:02:26 compute-0 nova_compute[247516]: 2026-01-22 00:02:26.820 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:02:26 compute-0 nova_compute[247516]: 2026-01-22 00:02:26.897 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:02:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:26.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
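[editor's note] The paired "starting new request" / "req done" lines are radosgw's beast frontend serving anonymous "HEAD /" probes from 192.168.122.100 and .102 every couple of seconds, the signature of a load-balancer health check. A sketch that reproduces the probe; the endpoint port is an assumption, since it never appears in these lines:

    # Sketch: send the same anonymous health probe the balancer sends.
    # Port 8080 is an assumption; the log does not show the RGW port.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the log shows 200 for every probe
    conn.close()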
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]: {
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:     "1": [
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:         {
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:             "devices": [
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:                 "/dev/loop3"
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:             ],
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:             "lv_name": "ceph_lv0",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:             "lv_size": "7511998464",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:             "name": "ceph_lv0",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:             "tags": {
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:                 "ceph.cluster_name": "ceph",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:                 "ceph.crush_device_class": "",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:                 "ceph.encrypted": "0",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:                 "ceph.osd_id": "1",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:                 "ceph.type": "block",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:                 "ceph.vdo": "0"
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:             },
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:             "type": "block",
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:             "vg_name": "ceph_vg0"
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:         }
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]:     ]
Jan 22 00:02:27 compute-0 beautiful_ellis[268675]: }
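[editor's note] The JSON block printed by the short-lived beautiful_ellis container appears to be `ceph-volume lvm list --format json` output: one logical volume, /dev/ceph_vg0/ceph_lv0 on /dev/loop3, tagged as the block device of OSD 1 in cluster 3759241a-7f1c-520d-ba17-879943ee2f00. A sketch of extracting the useful fields, assuming the JSON has been saved to a file (the filename is invented):

    # Sketch: map OSD ids to backing devices from `ceph-volume lvm list
    # --format json` output saved as lvm_list.json (assumed filename).
    import json

    with open("lvm_list.json") as fh:
        lvm = json.load(fh)

    for osd_id, lvs in lvm.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} devices={lv['devices']} "
                  f"fsid={tags['ceph.osd_fsid']}")
    # -> osd.1: lv=/dev/ceph_vg0/ceph_lv0 devices=['/dev/loop3'] fsid=4f45f4f4-...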
Jan 22 00:02:27 compute-0 systemd[1]: libpod-d919063a4074de85e6e94e0fe269972c8b2ff9a550293a769b8f519cb46395be.scope: Deactivated successfully.
Jan 22 00:02:27 compute-0 podman[268640]: 2026-01-22 00:02:27.149360666 +0000 UTC m=+1.101137027 container died d919063a4074de85e6e94e0fe269972c8b2ff9a550293a769b8f519cb46395be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ellis, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Jan 22 00:02:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbc6ee45add4d663da251ba35e777528363213244cc1a424a16599ead82ce5b0-merged.mount: Deactivated successfully.
Jan 22 00:02:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:02:27 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2560028784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:02:27 compute-0 nova_compute[247516]: 2026-01-22 00:02:27.387 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:02:27 compute-0 nova_compute[247516]: 2026-01-22 00:02:27.394 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:02:27 compute-0 nova_compute[247516]: 2026-01-22 00:02:27.444 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 00:02:27 compute-0 nova_compute[247516]: 2026-01-22 00:02:27.446 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:02:27 compute-0 nova_compute[247516]: 2026-01-22 00:02:27.446 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
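[editor's note] The update cycle that just released "compute_resources" pushed the inventory logged at 00:02:27 to Placement. With Placement's capacity rule, usable = (total - reserved) * allocation_ratio, those figures work out to 32 schedulable vCPUs, 7167 MB of RAM and 18 GB of disk. A worked check:

    # Sketch: effective capacity from the logged inventory, using
    # Placement's rule usable = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 20, "reserved": 0, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 18.0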
Jan 22 00:02:27 compute-0 podman[268640]: 2026-01-22 00:02:27.486799392 +0000 UTC m=+1.438575763 container remove d919063a4074de85e6e94e0fe269972c8b2ff9a550293a769b8f519cb46395be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ellis, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:02:27 compute-0 systemd[1]: libpod-conmon-d919063a4074de85e6e94e0fe269972c8b2ff9a550293a769b8f519cb46395be.scope: Deactivated successfully.
Jan 22 00:02:27 compute-0 sudo[268533]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:27 compute-0 sudo[268724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:02:27 compute-0 sudo[268724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:27 compute-0 sudo[268724]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:27 compute-0 sudo[268749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:02:27 compute-0 sudo[268749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:27 compute-0 sudo[268749]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:27 compute-0 sudo[268774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:02:27 compute-0 sudo[268774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:27 compute-0 sudo[268774]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:27 compute-0 ceph-mon[74318]: pgmap v1325: 305 pgs: 305 active+clean; 62 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 4.2 KiB/s wr, 54 op/s
Jan 22 00:02:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2560028784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:02:27 compute-0 sudo[268799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 22 00:02:27 compute-0 sudo[268799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:28.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:02:28 compute-0 podman[268865]: 2026-01-22 00:02:28.548176338 +0000 UTC m=+0.039412871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:02:28 compute-0 podman[268865]: 2026-01-22 00:02:28.683682452 +0000 UTC m=+0.174918945 container create 160c25f8f175fd563c6295affc7e384a97b91357cb17606d6ac5784a588e8389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_villani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:02:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 62 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 341 B/s wr, 22 op/s
Jan 22 00:02:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:28.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:29 compute-0 systemd[1]: Started libpod-conmon-160c25f8f175fd563c6295affc7e384a97b91357cb17606d6ac5784a588e8389.scope.
Jan 22 00:02:29 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:02:29 compute-0 podman[268865]: 2026-01-22 00:02:29.164491695 +0000 UTC m=+0.655728268 container init 160c25f8f175fd563c6295affc7e384a97b91357cb17606d6ac5784a588e8389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 00:02:29 compute-0 podman[268865]: 2026-01-22 00:02:29.172705721 +0000 UTC m=+0.663942224 container start 160c25f8f175fd563c6295affc7e384a97b91357cb17606d6ac5784a588e8389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:02:29 compute-0 cool_villani[268881]: 167 167
Jan 22 00:02:29 compute-0 systemd[1]: libpod-160c25f8f175fd563c6295affc7e384a97b91357cb17606d6ac5784a588e8389.scope: Deactivated successfully.
Jan 22 00:02:29 compute-0 podman[268865]: 2026-01-22 00:02:29.307907095 +0000 UTC m=+0.799143568 container attach 160c25f8f175fd563c6295affc7e384a97b91357cb17606d6ac5784a588e8389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_villani, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 00:02:29 compute-0 podman[268865]: 2026-01-22 00:02:29.308949388 +0000 UTC m=+0.800185861 container died 160c25f8f175fd563c6295affc7e384a97b91357cb17606d6ac5784a588e8389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 00:02:29 compute-0 nova_compute[247516]: 2026-01-22 00:02:29.448 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:02:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-13e813b02bef73164dacb421a18daca4a4d74954cf9649fbe28120ee4a2cd4fa-merged.mount: Deactivated successfully.
Jan 22 00:02:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:30.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:30 compute-0 podman[268865]: 2026-01-22 00:02:30.047355345 +0000 UTC m=+1.538591808 container remove 160c25f8f175fd563c6295affc7e384a97b91357cb17606d6ac5784a588e8389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:02:30 compute-0 systemd[1]: libpod-conmon-160c25f8f175fd563c6295affc7e384a97b91357cb17606d6ac5784a588e8389.scope: Deactivated successfully.
Jan 22 00:02:30 compute-0 ceph-mon[74318]: pgmap v1326: 305 pgs: 305 active+clean; 62 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 341 B/s wr, 22 op/s
Jan 22 00:02:30 compute-0 podman[268906]: 2026-01-22 00:02:30.280908246 +0000 UTC m=+0.072907308 container create e0f77873e7eded3e1ba001e7213d598953a57e3c190b30f277a2905024ddef29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:02:30 compute-0 podman[268906]: 2026-01-22 00:02:30.243507297 +0000 UTC m=+0.035506409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:02:30 compute-0 systemd[1]: Started libpod-conmon-e0f77873e7eded3e1ba001e7213d598953a57e3c190b30f277a2905024ddef29.scope.
Jan 22 00:02:30 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:02:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b14f65411b9260f04e425412a2d56d58d4bf1e3659c8af1491fa7f0ca61585b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:02:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b14f65411b9260f04e425412a2d56d58d4bf1e3659c8af1491fa7f0ca61585b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:02:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b14f65411b9260f04e425412a2d56d58d4bf1e3659c8af1491fa7f0ca61585b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:02:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b14f65411b9260f04e425412a2d56d58d4bf1e3659c8af1491fa7f0ca61585b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:02:30 compute-0 podman[268906]: 2026-01-22 00:02:30.482258038 +0000 UTC m=+0.274257080 container init e0f77873e7eded3e1ba001e7213d598953a57e3c190b30f277a2905024ddef29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 00:02:30 compute-0 podman[268906]: 2026-01-22 00:02:30.495822168 +0000 UTC m=+0.287821200 container start e0f77873e7eded3e1ba001e7213d598953a57e3c190b30f277a2905024ddef29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mcclintock, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:02:30 compute-0 podman[268906]: 2026-01-22 00:02:30.667685848 +0000 UTC m=+0.459684930 container attach e0f77873e7eded3e1ba001e7213d598953a57e3c190b30f277a2905024ddef29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mcclintock, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 00:02:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 879 KiB/s wr, 53 op/s
Jan 22 00:02:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:02:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:30.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:02:31 compute-0 reverent_mcclintock[268923]: {
Jan 22 00:02:31 compute-0 reverent_mcclintock[268923]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 22 00:02:31 compute-0 reverent_mcclintock[268923]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:02:31 compute-0 reverent_mcclintock[268923]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 00:02:31 compute-0 reverent_mcclintock[268923]:         "osd_id": 1,
Jan 22 00:02:31 compute-0 reverent_mcclintock[268923]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:02:31 compute-0 reverent_mcclintock[268923]:         "type": "bluestore"
Jan 22 00:02:31 compute-0 reverent_mcclintock[268923]:     }
Jan 22 00:02:31 compute-0 reverent_mcclintock[268923]: }
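[editor's note] This second listing, from reverent_mcclintock, matches the `ceph-volume ... raw list --format json` invocation visible in the sudo line at 00:02:27: the same bluestore OSD, this time keyed by OSD fsid rather than OSD id. A sketch that cross-checks the two listings against each other (filenames again invented):

    # Sketch: cross-check `raw list` (keyed by OSD fsid) against
    # `lvm list` (keyed by OSD id); both dumps assumed saved to files.
    import json

    raw = json.load(open("raw_list.json"))
    lvm = json.load(open("lvm_list.json"))

    fsid_to_id = {lv["tags"]["ceph.osd_fsid"]: osd_id
                  for osd_id, lvs in lvm.items() for lv in lvs}

    for fsid, osd in raw.items():
        assert fsid == osd["osd_uuid"]
        assert fsid_to_id[fsid] == str(osd["osd_id"]), "listings disagree"
        print(f"osd.{osd['osd_id']} ({osd['type']}) on {osd['device']}: consistent")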
Jan 22 00:02:31 compute-0 systemd[1]: libpod-e0f77873e7eded3e1ba001e7213d598953a57e3c190b30f277a2905024ddef29.scope: Deactivated successfully.
Jan 22 00:02:31 compute-0 podman[268906]: 2026-01-22 00:02:31.446244918 +0000 UTC m=+1.238243940 container died e0f77873e7eded3e1ba001e7213d598953a57e3c190b30f277a2905024ddef29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:02:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-b14f65411b9260f04e425412a2d56d58d4bf1e3659c8af1491fa7f0ca61585b0-merged.mount: Deactivated successfully.
Jan 22 00:02:31 compute-0 podman[268906]: 2026-01-22 00:02:31.682219353 +0000 UTC m=+1.474218415 container remove e0f77873e7eded3e1ba001e7213d598953a57e3c190b30f277a2905024ddef29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mcclintock, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 00:02:31 compute-0 systemd[1]: libpod-conmon-e0f77873e7eded3e1ba001e7213d598953a57e3c190b30f277a2905024ddef29.scope: Deactivated successfully.
Jan 22 00:02:31 compute-0 sudo[268799]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:02:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:02:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:02:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:02:31 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 7396b47e-2279-436c-b0b9-b29b40296661 does not exist
Jan 22 00:02:31 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 34bc8a41-c42a-425e-b11a-dea9234cc614 does not exist
Jan 22 00:02:31 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev dbcd3491-c511-4393-9057-10ca485ea5d8 does not exist
Jan 22 00:02:31 compute-0 sudo[268957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:02:31 compute-0 sudo[268957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:31 compute-0 sudo[268957]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:32.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:32 compute-0 sudo[268982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 00:02:32 compute-0 sudo[268982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:32 compute-0 sudo[268982]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:32 compute-0 ceph-mon[74318]: pgmap v1327: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 879 KiB/s wr, 53 op/s
Jan 22 00:02:32 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:02:32 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:02:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 95 MiB data, 293 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.3 MiB/s wr, 57 op/s
Jan 22 00:02:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:32.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:33 compute-0 sudo[269007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:02:33 compute-0 sudo[269007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:33 compute-0 sudo[269007]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:02:33 compute-0 sudo[269032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:02:33 compute-0 sudo[269032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:33 compute-0 sudo[269032]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:34.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 95 MiB data, 293 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.3 MiB/s wr, 56 op/s
Jan 22 00:02:34 compute-0 ceph-mon[74318]: pgmap v1328: 305 pgs: 305 active+clean; 95 MiB data, 293 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.3 MiB/s wr, 57 op/s
Jan 22 00:02:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:34.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:36.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:36 compute-0 ceph-mon[74318]: pgmap v1329: 305 pgs: 305 active+clean; 95 MiB data, 293 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.3 MiB/s wr, 56 op/s
Jan 22 00:02:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 201 MiB data, 336 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 5.3 MiB/s wr, 116 op/s
Jan 22 00:02:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:36.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:38.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:38 compute-0 ceph-mon[74318]: pgmap v1330: 305 pgs: 305 active+clean; 201 MiB data, 336 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 5.3 MiB/s wr, 116 op/s
Jan 22 00:02:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:02:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 201 MiB data, 336 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 5.3 MiB/s wr, 101 op/s
Jan 22 00:02:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:02:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:38.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:02:39
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'images', 'volumes']
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
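[editor's note] The balancer pass above evaluated all eleven pools in upmap mode and prepared 0/10 changes, i.e. the PG distribution already sits inside the 5% misplaced threshold. The same state can be read back from the mgr; a sketch, assuming the ceph CLI and an admin keyring are available on this node:

    # Sketch: read back the balancer state described in the log lines.
    # Assumes `ceph` CLI access with an admin keyring on this host.
    import json, subprocess

    out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    status = json.loads(out)
    print(status["mode"], status["active"])  # expect: upmap True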
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:02:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:02:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:40.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 201 MiB data, 336 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 5.3 MiB/s wr, 101 op/s
Jan 22 00:02:40 compute-0 ceph-mon[74318]: pgmap v1331: 305 pgs: 305 active+clean; 201 MiB data, 336 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 5.3 MiB/s wr, 101 op/s
Jan 22 00:02:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:02:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:40.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:02:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:02:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:42.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:02:42 compute-0 ceph-mon[74318]: pgmap v1332: 305 pgs: 305 active+clean; 201 MiB data, 336 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 5.3 MiB/s wr, 101 op/s
Jan 22 00:02:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 201 MiB data, 336 MiB used, 21 GiB / 21 GiB avail; 44 KiB/s rd, 4.5 MiB/s wr, 70 op/s
Jan 22 00:02:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:43.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:02:44 compute-0 podman[269063]: 2026-01-22 00:02:44.026620879 +0000 UTC m=+0.129224932 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
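[editor's note] The health_status=healthy event is podman's periodic healthcheck for ovn_controller; per the embedded config_data, the test is simply running /openstack/healthcheck inside the container. The current state can also be queried on demand; a sketch, assuming root podman access on this host:

    # Sketch: query the container health state podman just logged.
    # Assumes root podman access; the Health field name can differ on
    # much older podman versions (.State.Healthcheck.Status).
    import subprocess

    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "ovn_controller"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(status)  # the event above reported "healthy"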
Jan 22 00:02:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:44.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:44 compute-0 ceph-mon[74318]: pgmap v1333: 305 pgs: 305 active+clean; 201 MiB data, 336 MiB used, 21 GiB / 21 GiB avail; 44 KiB/s rd, 4.5 MiB/s wr, 70 op/s
Jan 22 00:02:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 201 MiB data, 336 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 4.1 MiB/s wr, 60 op/s
Jan 22 00:02:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:45.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:02:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:46.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:02:46 compute-0 ceph-mon[74318]: pgmap v1334: 305 pgs: 305 active+clean; 201 MiB data, 336 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 4.1 MiB/s wr, 60 op/s
Jan 22 00:02:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 201 MiB data, 336 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 4.1 MiB/s wr, 60 op/s
Jan 22 00:02:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:47.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:48.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:02:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 201 MiB data, 336 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:02:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:02:48.761 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:02:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:02:48.762 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:02:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:02:48.762 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:02:48 compute-0 ceph-mon[74318]: pgmap v1335: 305 pgs: 305 active+clean; 201 MiB data, 336 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 4.1 MiB/s wr, 60 op/s
Jan 22 00:02:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:49.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:02:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:50.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:02:50 compute-0 ceph-mon[74318]: pgmap v1336: 305 pgs: 305 active+clean; 201 MiB data, 336 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:02:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 160 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 597 B/s wr, 5 op/s
Jan 22 00:02:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:51.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:51 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/4081075641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:02:51 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/4081075641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:02:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:02:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:52.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:02:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 00:02:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6908 writes, 29K keys, 6907 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6908 writes, 6907 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1574 writes, 6662 keys, 1574 commit groups, 1.0 writes per commit group, ingest: 10.59 MB, 0.02 MB/s
                                           Interval WAL: 1574 writes, 1574 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     97.2      0.40              0.14        17    0.023       0      0       0.0       0.0
 L6      1/0    8.46 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.6    124.5    102.2      1.36              0.56        16    0.085     79K   8975       0.0       0.0
Sum      1/0    8.46 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.6     96.3    101.1      1.76              0.70        33    0.053     79K   8975       0.0       0.0
Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6    107.8    110.8      0.44              0.22         8    0.056     23K   2588       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    124.5    102.2      1.36              0.56        16    0.085     79K   8975       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     98.3      0.39              0.14        16    0.025       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 2400.0 total, 600.0 interval
Flush(GB): cumulative 0.038, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.17 GB write, 0.07 MB/s write, 0.17 GB read, 0.07 MB/s read, 1.8 seconds
Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559f1db2f1f0#2 capacity: 304.00 MB usage: 18.18 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000168 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1039,17.55 MB,5.77239%) FilterBlock(34,225.30 KB,0.0723738%) IndexBlock(34,420.34 KB,0.13503%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
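[editor's note] The throughput figures in this dump are internally consistent: each MB/s value is the cumulative GB counter divided by the uptime. A minimal cross-check in Python (the quoted text is from the block above; only the regexes are mine):

    import re

    stats = (
        "Uptime(secs): 2400.0 total, 600.0 interval\n"
        "Cumulative compaction: 0.17 GB write, 0.07 MB/s write, "
        "0.17 GB read, 0.07 MB/s read, 1.8 seconds\n"
    )
    uptime = float(re.search(r"Uptime\(secs\): ([\d.]+) total", stats).group(1))
    gb = float(re.search(r"compaction: ([\d.]+) GB write", stats).group(1))
    # 0.17 GB over 2400 s is about 0.07 MB/s, matching the reported rate.
    print(f"{gb * 1024 / uptime:.2f} MB/s")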
Jan 22 00:02:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 154 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 00:02:52 compute-0 ceph-mon[74318]: pgmap v1337: 305 pgs: 305 active+clean; 160 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 597 B/s wr, 5 op/s
Jan 22 00:02:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:53 compute-0 podman[269094]: 2026-01-22 00:02:53.015473453 +0000 UTC m=+0.116162777 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 00:02:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:53.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:02:53 compute-0 sudo[269115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:02:53 compute-0 sudo[269115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:53 compute-0 sudo[269115]: pam_unix(sudo:session): session closed for user root
Jan 22 00:02:53 compute-0 sudo[269140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:02:53 compute-0 sudo[269140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:02:53 compute-0 sudo[269140]: pam_unix(sudo:session): session closed for user root
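[editor's note] The sudo sessions above run "/bin/true" as root and do nothing else; they only verify passwordless root access for the ceph-admin account, consistent with cephadm-style connection checks. An equivalent probe; the -n flag (fail instead of prompting) is my addition:

    import subprocess

    # /bin/true does no work; the exit code is the whole result.
    ok = subprocess.run(["sudo", "-n", "/bin/true"]).returncode == 0
    print("passwordless sudo OK" if ok else "sudo denied or would prompt")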
Jan 22 00:02:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:54.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0019800245440163783 of space, bias 1.0, pg target 0.5940073632049135 quantized to 32 (current 32)
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8563869695700725 quantized to 32 (current 32)
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
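[editor's note] The pool-by-pool arithmetic above is reproducible: each "pg target" is the pool's capacity ratio times its bias times the cluster PG budget. A sketch, assuming the default mon_target_pg_per_osd of 100 and the 3 OSDs reported elsewhere in this log (both are assumptions; the autoscaler does not print them here):

    # pg target = capacity_ratio * bias * (target_pg_per_osd * osd_count)
    def pg_target(capacity_ratio, bias, osd_count=3, target_pg_per_osd=100):
        return capacity_ratio * bias * osd_count * target_pg_per_osd

    # Pool 'volumes': reproduces the logged 0.5940073632049135
    print(pg_target(0.0019800245440163783, 1.0))
    # Pool 'cephfs.cephfs.meta' (bias 4.0): reproduces 0.0017448352875488555
    print(pg_target(1.4540294062907128e-06, 4.0))

The trailing "quantized to N (current N)" shows the target rounded to the value the autoscaler would actually apply; here it matches the current pg_num for every pool, so no pool is resized on this pass.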
Jan 22 00:02:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 154 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 00:02:54 compute-0 ceph-mon[74318]: pgmap v1338: 305 pgs: 305 active+clean; 154 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 00:02:54 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3539956181' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:02:54 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3539956181' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:02:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:55.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:56.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:56 compute-0 ceph-mon[74318]: pgmap v1339: 305 pgs: 305 active+clean; 154 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 00:02:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 108 MiB data, 297 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Jan 22 00:02:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:57.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:02:58.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
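[editor's note] The radosgw triplets repeating every other second (an anonymous "HEAD / HTTP/1.0", answered 200 with near-zero latency, alternating between 192.168.122.100 and .102) are the signature of load-balancer health probes. A sketch reproducing one probe; the endpoint is an assumption, since the log never prints which address/port this radosgw beast frontend listens on:

    import socket

    # Hypothetical RGW endpoint; substitute the real beast frontend address.
    RGW = ("compute-0.ctlplane.example.com", 8080)

    with socket.create_connection(RGW, timeout=5) as sock:
        sock.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
        # First response line, e.g. "HTTP/1.1 200 OK"
        print(sock.recv(1024).split(b"\r\n", 1)[0].decode())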
Jan 22 00:02:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:02:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 108 MiB data, 297 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Jan 22 00:02:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:02:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:02:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:02:59.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:02:59 compute-0 ceph-mon[74318]: pgmap v1340: 305 pgs: 305 active+clean; 108 MiB data, 297 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Jan 22 00:02:59 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3397641420' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:02:59 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3397641420' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:03:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:00.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:00 compute-0 ceph-mon[74318]: pgmap v1341: 305 pgs: 305 active+clean; 108 MiB data, 297 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Jan 22 00:03:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 70 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 43 KiB/s rd, 24 KiB/s wr, 59 op/s
Jan 22 00:03:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:01.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:02.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:02 compute-0 ceph-mon[74318]: pgmap v1342: 305 pgs: 305 active+clean; 70 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 43 KiB/s rd, 24 KiB/s wr, 59 op/s
Jan 22 00:03:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 23 KiB/s wr, 56 op/s
Jan 22 00:03:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:03.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:03:03 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1517182996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:03:03 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1517182996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:03:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:04.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:04 compute-0 ceph-mon[74318]: pgmap v1343: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 23 KiB/s wr, 56 op/s
Jan 22 00:03:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 23 KiB/s wr, 45 op/s
Jan 22 00:03:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:05.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:05 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3996125445' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:03:05 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3996125445' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:03:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:06.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:06 compute-0 ceph-mon[74318]: pgmap v1344: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 23 KiB/s wr, 45 op/s
Jan 22 00:03:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 47 KiB/s rd, 23 KiB/s wr, 64 op/s
Jan 22 00:03:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:03:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:07.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:03:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 00:03:07 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/283555613' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:03:07 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 00:03:07 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/283555613' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:03:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:08.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:03:08 compute-0 ceph-mon[74318]: pgmap v1345: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 47 KiB/s rd, 23 KiB/s wr, 64 op/s
Jan 22 00:03:08 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/283555613' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:03:08 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/283555613' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
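[editor's note] The recurring audit pairs ("df" followed by "osd pool get-quota" on 'volumes', roughly every five seconds from client.openstack at 192.168.122.10) look like Cinder's RBD capacity polling. They can be replayed with the librados Python binding; a sketch, assuming a reachable cluster and a keyring for client.openstack:

    import json

    import rados  # ceph's librados binding

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            # mon_command takes the JSON command string plus an input buffer
            # and returns (retcode, output bytes, error string).
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, f"{len(out)} bytes")
    finally:
        cluster.shutdown()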
Jan 22 00:03:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 23 KiB/s wr, 43 op/s
Jan 22 00:03:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:09.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:03:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:03:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:03:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:03:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:03:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:03:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:10.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:10 compute-0 ceph-mon[74318]: pgmap v1346: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 23 KiB/s wr, 43 op/s
Jan 22 00:03:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 44 KiB/s rd, 23 KiB/s wr, 61 op/s
Jan 22 00:03:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:11.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:11 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:03:11.601 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 00:03:11 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:03:11.605 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 00:03:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:03:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:12.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:03:12 compute-0 ceph-mon[74318]: pgmap v1347: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 44 KiB/s rd, 23 KiB/s wr, 61 op/s
Jan 22 00:03:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 1.1 KiB/s wr, 47 op/s
Jan 22 00:03:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:13.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:13 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2013518361' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:03:13 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2013518361' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:03:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:03:13 compute-0 sudo[269176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:03:13 compute-0 sudo[269176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:13 compute-0 sudo[269176]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:13 compute-0 sudo[269201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:03:13 compute-0 sudo[269201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:13 compute-0 sudo[269201]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:14.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 1.1 KiB/s wr, 45 op/s
Jan 22 00:03:14 compute-0 ceph-mon[74318]: pgmap v1348: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 1.1 KiB/s wr, 47 op/s
Jan 22 00:03:14 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/582605744' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:03:14 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/582605744' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:03:14 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3406929688' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:03:14 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3406929688' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:03:14 compute-0 nova_compute[247516]: 2026-01-22 00:03:14.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:03:14 compute-0 nova_compute[247516]: 2026-01-22 00:03:14.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:03:15 compute-0 podman[269226]: 2026-01-22 00:03:15.0179411 +0000 UTC m=+0.113915618 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 22 00:03:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:15.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:16.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 00:03:16 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4232194459' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:03:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 00:03:16 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4232194459' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:03:16 compute-0 ceph-mon[74318]: pgmap v1349: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 1.1 KiB/s wr, 45 op/s
Jan 22 00:03:16 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/4232194459' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:03:16 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/4232194459' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:03:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 1.8 KiB/s wr, 79 op/s
Jan 22 00:03:16 compute-0 nova_compute[247516]: 2026-01-22 00:03:16.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:03:16 compute-0 nova_compute[247516]: 2026-01-22 00:03:16.994 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:03:16 compute-0 nova_compute[247516]: 2026-01-22 00:03:16.994 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:03:17 compute-0 nova_compute[247516]: 2026-01-22 00:03:17.006 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 00:03:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:03:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:17.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:03:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:03:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:18.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:03:18 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1975904921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:03:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:03:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 44 KiB/s rd, 1.3 KiB/s wr, 60 op/s
Jan 22 00:03:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Jan 22 00:03:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:19.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Jan 22 00:03:19 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Jan 22 00:03:19 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:03:19.607 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 00:03:19 compute-0 ceph-mon[74318]: pgmap v1350: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 1.8 KiB/s wr, 79 op/s
Jan 22 00:03:19 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1341689668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:03:19 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2470596100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:03:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:20.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:20 compute-0 ceph-mon[74318]: pgmap v1351: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 44 KiB/s rd, 1.3 KiB/s wr, 60 op/s
Jan 22 00:03:20 compute-0 ceph-mon[74318]: osdmap e170: 3 total, 3 up, 3 in
Jan 22 00:03:20 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/463145430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:03:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 2.0 KiB/s wr, 76 op/s
Jan 22 00:03:21 compute-0 nova_compute[247516]: 2026-01-22 00:03:21.000 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:03:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:21.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:21 compute-0 nova_compute[247516]: 2026-01-22 00:03:21.987 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:03:22 compute-0 nova_compute[247516]: 2026-01-22 00:03:22.018 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:03:22 compute-0 ceph-mon[74318]: pgmap v1353: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 2.0 KiB/s wr, 76 op/s
Jan 22 00:03:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:22.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 2.0 KiB/s wr, 66 op/s
Jan 22 00:03:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:03:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:23.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:03:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:03:23 compute-0 podman[269258]: 2026-01-22 00:03:23.971106481 +0000 UTC m=+0.088163941 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 22 00:03:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:24.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:24 compute-0 ceph-mon[74318]: pgmap v1354: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 2.0 KiB/s wr, 66 op/s
Jan 22 00:03:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 2.0 KiB/s wr, 66 op/s
Jan 22 00:03:24 compute-0 nova_compute[247516]: 2026-01-22 00:03:24.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:03:24 compute-0 nova_compute[247516]: 2026-01-22 00:03:24.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:03:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:25.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:26.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:26 compute-0 ceph-mon[74318]: pgmap v1355: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 2.0 KiB/s wr, 66 op/s
Jan 22 00:03:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1174422771' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:03:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1174422771' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:03:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 1.7 KiB/s wr, 33 op/s
Jan 22 00:03:26 compute-0 nova_compute[247516]: 2026-01-22 00:03:26.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:03:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:03:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:27.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:03:27 compute-0 nova_compute[247516]: 2026-01-22 00:03:27.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:03:28 compute-0 nova_compute[247516]: 2026-01-22 00:03:28.040 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:03:28 compute-0 nova_compute[247516]: 2026-01-22 00:03:28.041 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:03:28 compute-0 nova_compute[247516]: 2026-01-22 00:03:28.041 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:03:28 compute-0 nova_compute[247516]: 2026-01-22 00:03:28.042 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:03:28 compute-0 nova_compute[247516]: 2026-01-22 00:03:28.043 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:03:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:03:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:28.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:03:28 compute-0 ceph-mon[74318]: pgmap v1356: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 1.7 KiB/s wr, 33 op/s
Jan 22 00:03:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:03:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Jan 22 00:03:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:03:28 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3017678341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:03:28 compute-0 nova_compute[247516]: 2026-01-22 00:03:28.511 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
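[editor's note] The resource tracker's capacity probe is shown verbatim above, so it can be reproduced directly; with an RBD-backed ephemeral store, the hypervisor's free_disk figure (20.98828125GB below, against 21 GiB avail in the pgmap lines) comes from this ceph df call rather than local disk. A sketch; the JSON field names ("stats", "total_avail_bytes") are from ceph's df output and worth confirming against your release:

    import json
    import subprocess

    # The exact command nova runs in the lines above.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, check=True, text=True).stdout
    avail = json.loads(out)["stats"]["total_avail_bytes"]
    print(f"avail: {avail / 2**30:.2f} GiB")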
Jan 22 00:03:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Jan 22 00:03:28 compute-0 nova_compute[247516]: 2026-01-22 00:03:28.645 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:03:28 compute-0 nova_compute[247516]: 2026-01-22 00:03:28.646 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5200MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:03:28 compute-0 nova_compute[247516]: 2026-01-22 00:03:28.646 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:03:28 compute-0 nova_compute[247516]: 2026-01-22 00:03:28.647 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:03:28 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Jan 22 00:03:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 1.8 KiB/s wr, 34 op/s
Jan 22 00:03:28 compute-0 nova_compute[247516]: 2026-01-22 00:03:28.848 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 22 00:03:28 compute-0 nova_compute[247516]: 2026-01-22 00:03:28.893 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance 3a013437-7f37-41b6-9c4b-c91bc3c935d0 has allocations against this compute host but is not found in the database.
Jan 22 00:03:28 compute-0 nova_compute[247516]: 2026-01-22 00:03:28.924 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance 9ca72661-fbe5-4048-9b57-97e05da80296 has allocations against this compute host but is not found in the database.
Jan 22 00:03:28 compute-0 nova_compute[247516]: 2026-01-22 00:03:28.925 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:03:28 compute-0 nova_compute[247516]: 2026-01-22 00:03:28.925 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:03:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:03:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:29.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:03:29 compute-0 nova_compute[247516]: 2026-01-22 00:03:29.069 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:03:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:03:29 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1075908549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:03:29 compute-0 nova_compute[247516]: 2026-01-22 00:03:29.620 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:03:29 compute-0 nova_compute[247516]: 2026-01-22 00:03:29.627 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:03:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3017678341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:03:29 compute-0 ceph-mon[74318]: osdmap e171: 3 total, 3 up, 3 in
Jan 22 00:03:29 compute-0 nova_compute[247516]: 2026-01-22 00:03:29.711 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
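The inventory dict above is what the scheduler actually works against: Placement computes per-class capacity as (total - reserved) * allocation_ratio. Checking the logged numbers:

    # Inventory exactly as reported in the log line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        # Placement's capacity formula per resource class.
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 18

So this 8-vCPU host can overcommit to 32 vCPUs, while disk is deliberately undercommitted to 18 of its 20 GB.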
Jan 22 00:03:29 compute-0 nova_compute[247516]: 2026-01-22 00:03:29.713 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:03:29 compute-0 nova_compute[247516]: 2026-01-22 00:03:29.714 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
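The lock line above is oslo.concurrency's lockutils reporting that the resource tracker held its "compute_resources" semaphore for 1.067s across the whole update. The same pattern in a stand-alone sketch, using the real lockutils API:

    from oslo_concurrency import lockutils

    # Process-local semaphore with the same name as in the log line;
    # lockutils emits the acquired/released DEBUG lines seen above.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        ...  # mutate the shared resource view under the lock

    update_available_resource()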
Jan 22 00:03:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:03:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:30.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:03:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail; 5.4 KiB/s rd, 614 B/s wr, 8 op/s
Jan 22 00:03:30 compute-0 ceph-mon[74318]: pgmap v1358: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 1.8 KiB/s wr, 34 op/s
Jan 22 00:03:30 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1075908549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:03:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:31.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
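The once-per-second anonymous "HEAD / HTTP/1.0" requests alternating between 192.168.122.100 and 192.168.122.102 are load-balancer health probes against the radosgw beast frontend; an unauthenticated HEAD on / returns 200 while the gateway is up, which is why they dominate this excerpt. An equivalent probe with the standard library (host and port are assumptions; the real VIP and port are not shown in this excerpt):

    import http.client

    # Hypothetical endpoint: substitute the deployment's RGW address/port.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200 while healthy
    conn.close()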
Jan 22 00:03:31 compute-0 nova_compute[247516]: 2026-01-22 00:03:31.715 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:03:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:32.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:32 compute-0 sudo[269326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:03:32 compute-0 sudo[269326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:32 compute-0 sudo[269326]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:32 compute-0 ceph-mon[74318]: pgmap v1359: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail; 5.4 KiB/s rd, 614 B/s wr, 8 op/s
Jan 22 00:03:32 compute-0 sudo[269351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:03:32 compute-0 sudo[269351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:32 compute-0 sudo[269351]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:32 compute-0 sudo[269376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:03:32 compute-0 sudo[269376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:32 compute-0 sudo[269376]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail; 4.6 KiB/s rd, 614 B/s wr, 6 op/s
Jan 22 00:03:32 compute-0 sudo[269401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 00:03:32 compute-0 sudo[269401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:33.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:33 compute-0 sudo[269401]: pam_unix(sudo:session): session closed for user root
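The one-second sudo session above is the cephadm mgr module running its copied binary with "gather-facts", which prints a JSON document of host facts (hostname, NICs, disks, memory) used for service placement. Collecting the same facts by hand, a sketch assuming a cephadm binary on PATH (the two field names printed are assumptions about its output schema):

    import json
    import subprocess

    # 'cephadm gather-facts' emits a single JSON object on stdout.
    out = subprocess.run(
        ["cephadm", "gather-facts"],
        check=True, capture_output=True, text=True,
    ).stdout
    facts = json.loads(out)
    # Field names assumed from cephadm's host-facts output.
    print(facts.get("hostname"), facts.get("memory_total_kb"))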
Jan 22 00:03:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:03:33 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:03:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 00:03:33 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:03:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 00:03:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:03:33 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:03:33 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 29eb22b9-1a5c-4511-82f0-92d6c3c60481 does not exist
Jan 22 00:03:33 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev bd843cf4-608e-42c1-915a-3c57d35c7542 does not exist
Jan 22 00:03:33 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 681cc16f-0f48-4e3a-9eaa-69badcc8aee8 does not exist
Jan 22 00:03:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 00:03:33 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:03:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 00:03:33 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:03:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:03:33 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:03:33 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:03:33 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:03:33 compute-0 sudo[269457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:03:33 compute-0 sudo[269457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:33 compute-0 sudo[269457]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:33 compute-0 sudo[269482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:03:33 compute-0 sudo[269482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:33 compute-0 sudo[269482]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:33 compute-0 sudo[269487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:03:33 compute-0 sudo[269487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:33 compute-0 sudo[269487]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:33 compute-0 sudo[269531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:03:33 compute-0 sudo[269531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:33 compute-0 sudo[269531]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:34 compute-0 sudo[269537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:03:34 compute-0 sudo[269537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:34 compute-0 sudo[269537]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:34 compute-0 sudo[269580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 00:03:34 compute-0 sudo[269580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:03:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:34.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:03:34 compute-0 podman[269648]: 2026-01-22 00:03:34.454938194 +0000 UTC m=+0.044631502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:03:34 compute-0 podman[269648]: 2026-01-22 00:03:34.625978908 +0000 UTC m=+0.215672156 container create c4f0b30cad60a0e1b3a2a6fccfb8541ddaa582d9457b86494cd4ae101129bfd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_snyder, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 00:03:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail; 4.6 KiB/s rd, 614 B/s wr, 6 op/s
Jan 22 00:03:34 compute-0 systemd[1]: Started libpod-conmon-c4f0b30cad60a0e1b3a2a6fccfb8541ddaa582d9457b86494cd4ae101129bfd3.scope.
Jan 22 00:03:34 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:03:35 compute-0 ceph-mon[74318]: pgmap v1360: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail; 4.6 KiB/s rd, 614 B/s wr, 6 op/s
Jan 22 00:03:35 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:03:35 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:03:35 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:03:35 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:03:35 compute-0 podman[269648]: 2026-01-22 00:03:35.061455768 +0000 UTC m=+0.651149056 container init c4f0b30cad60a0e1b3a2a6fccfb8541ddaa582d9457b86494cd4ae101129bfd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 00:03:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:35.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:35 compute-0 podman[269648]: 2026-01-22 00:03:35.074192293 +0000 UTC m=+0.663885491 container start c4f0b30cad60a0e1b3a2a6fccfb8541ddaa582d9457b86494cd4ae101129bfd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_snyder, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 22 00:03:35 compute-0 thirsty_snyder[269665]: 167 167
Jan 22 00:03:35 compute-0 systemd[1]: libpod-c4f0b30cad60a0e1b3a2a6fccfb8541ddaa582d9457b86494cd4ae101129bfd3.scope: Deactivated successfully.
Jan 22 00:03:35 compute-0 podman[269648]: 2026-01-22 00:03:35.240358977 +0000 UTC m=+0.830052185 container attach c4f0b30cad60a0e1b3a2a6fccfb8541ddaa582d9457b86494cd4ae101129bfd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 00:03:35 compute-0 podman[269648]: 2026-01-22 00:03:35.241400789 +0000 UTC m=+0.831093997 container died c4f0b30cad60a0e1b3a2a6fccfb8541ddaa582d9457b86494cd4ae101129bfd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_snyder, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:03:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-07131cc4c0a7c168c5a10e301e95eac5051a40f980c05887229435e21df5abcc-merged.mount: Deactivated successfully.
Jan 22 00:03:35 compute-0 podman[269648]: 2026-01-22 00:03:35.34706794 +0000 UTC m=+0.936761148 container remove c4f0b30cad60a0e1b3a2a6fccfb8541ddaa582d9457b86494cd4ae101129bfd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 00:03:35 compute-0 systemd[1]: libpod-conmon-c4f0b30cad60a0e1b3a2a6fccfb8541ddaa582d9457b86494cd4ae101129bfd3.scope: Deactivated successfully.
Jan 22 00:03:35 compute-0 podman[269691]: 2026-01-22 00:03:35.601770394 +0000 UTC m=+0.066511539 container create 9fa3f51ea2412acbb03ccd4288b3074ad26bb1c84945f9b6083f11373e0f0b63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:03:35 compute-0 systemd[1]: Started libpod-conmon-9fa3f51ea2412acbb03ccd4288b3074ad26bb1c84945f9b6083f11373e0f0b63.scope.
Jan 22 00:03:35 compute-0 podman[269691]: 2026-01-22 00:03:35.577643518 +0000 UTC m=+0.042384643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:03:35 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc95be765653025228133cbf4da78acbe0b524484ab6238827caacd335d3da7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc95be765653025228133cbf4da78acbe0b524484ab6238827caacd335d3da7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc95be765653025228133cbf4da78acbe0b524484ab6238827caacd335d3da7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc95be765653025228133cbf4da78acbe0b524484ab6238827caacd335d3da7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc95be765653025228133cbf4da78acbe0b524484ab6238827caacd335d3da7e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 00:03:35 compute-0 podman[269691]: 2026-01-22 00:03:35.719581751 +0000 UTC m=+0.184322906 container init 9fa3f51ea2412acbb03ccd4288b3074ad26bb1c84945f9b6083f11373e0f0b63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_margulis, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 00:03:35 compute-0 podman[269691]: 2026-01-22 00:03:35.73602098 +0000 UTC m=+0.200762115 container start 9fa3f51ea2412acbb03ccd4288b3074ad26bb1c84945f9b6083f11373e0f0b63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_margulis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 00:03:35 compute-0 podman[269691]: 2026-01-22 00:03:35.741412417 +0000 UTC m=+0.206153552 container attach 9fa3f51ea2412acbb03ccd4288b3074ad26bb1c84945f9b6083f11373e0f0b63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_margulis, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 00:03:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:03:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:36.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:03:36 compute-0 ceph-mon[74318]: pgmap v1361: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail; 4.6 KiB/s rd, 614 B/s wr, 6 op/s
Jan 22 00:03:36 compute-0 eager_margulis[269708]: --> passed data devices: 0 physical, 1 LVM
Jan 22 00:03:36 compute-0 eager_margulis[269708]: --> relative data size: 1.0
Jan 22 00:03:36 compute-0 eager_margulis[269708]: --> All data devices are unavailable
Jan 22 00:03:36 compute-0 systemd[1]: libpod-9fa3f51ea2412acbb03ccd4288b3074ad26bb1c84945f9b6083f11373e0f0b63.scope: Deactivated successfully.
Jan 22 00:03:36 compute-0 podman[269691]: 2026-01-22 00:03:36.616131795 +0000 UTC m=+1.080872940 container died 9fa3f51ea2412acbb03ccd4288b3074ad26bb1c84945f9b6083f11373e0f0b63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 00:03:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc95be765653025228133cbf4da78acbe0b524484ab6238827caacd335d3da7e-merged.mount: Deactivated successfully.
Jan 22 00:03:36 compute-0 podman[269691]: 2026-01-22 00:03:36.685266535 +0000 UTC m=+1.150007680 container remove 9fa3f51ea2412acbb03ccd4288b3074ad26bb1c84945f9b6083f11373e0f0b63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_margulis, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 00:03:36 compute-0 systemd[1]: libpod-conmon-9fa3f51ea2412acbb03ccd4288b3074ad26bb1c84945f9b6083f11373e0f0b63.scope: Deactivated successfully.
Jan 22 00:03:36 compute-0 sudo[269580]: pam_unix(sudo:session): session closed for user root
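The "passed data devices: 0 physical, 1 LVM" through "All data devices are unavailable" messages above are the outcome of the "lvm batch" run started by the sudo command: cephadm offered /dev/ceph_vg0/ceph_lv0, ceph-volume found it already consumed by an OSD (the lvm list output further below confirms osd.1 lives there), and the batch exited as a no-op, so the session simply closes. To see why ceph-volume rejects a device, its inventory can be queried directly; a sketch assuming cephadm is on PATH, using the fsid from the log:

    import json
    import subprocess

    FSID = "3759241a-7f1c-520d-ba17-879943ee2f00"

    # 'ceph-volume inventory' reports availability plus rejected_reasons
    # per device; running it via cephadm keeps it inside the ceph image.
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID, "--",
         "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for dev in json.loads(out):
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))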
Jan 22 00:03:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:36 compute-0 sudo[269735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:03:36 compute-0 sudo[269735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:36 compute-0 sudo[269735]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:36 compute-0 sudo[269760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:03:36 compute-0 sudo[269760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:36 compute-0 sudo[269760]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:37 compute-0 sudo[269785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:03:37 compute-0 sudo[269785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:37 compute-0 sudo[269785]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:37.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:37 compute-0 sudo[269810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 22 00:03:37 compute-0 sudo[269810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:37 compute-0 podman[269877]: 2026-01-22 00:03:37.550071215 +0000 UTC m=+0.055110227 container create ae68d7f2f0cf4618261004dbf73f223441ec962fb66fb6a580c431b133a0de35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 00:03:37 compute-0 systemd[1]: Started libpod-conmon-ae68d7f2f0cf4618261004dbf73f223441ec962fb66fb6a580c431b133a0de35.scope.
Jan 22 00:03:37 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:03:37 compute-0 podman[269877]: 2026-01-22 00:03:37.525298448 +0000 UTC m=+0.030337550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:03:37 compute-0 podman[269877]: 2026-01-22 00:03:37.626270224 +0000 UTC m=+0.131309256 container init ae68d7f2f0cf4618261004dbf73f223441ec962fb66fb6a580c431b133a0de35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 00:03:37 compute-0 podman[269877]: 2026-01-22 00:03:37.631535927 +0000 UTC m=+0.136574939 container start ae68d7f2f0cf4618261004dbf73f223441ec962fb66fb6a580c431b133a0de35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hugle, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:03:37 compute-0 podman[269877]: 2026-01-22 00:03:37.635003175 +0000 UTC m=+0.140042227 container attach ae68d7f2f0cf4618261004dbf73f223441ec962fb66fb6a580c431b133a0de35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 22 00:03:37 compute-0 clever_hugle[269893]: 167 167
Jan 22 00:03:37 compute-0 systemd[1]: libpod-ae68d7f2f0cf4618261004dbf73f223441ec962fb66fb6a580c431b133a0de35.scope: Deactivated successfully.
Jan 22 00:03:37 compute-0 podman[269877]: 2026-01-22 00:03:37.638923166 +0000 UTC m=+0.143962218 container died ae68d7f2f0cf4618261004dbf73f223441ec962fb66fb6a580c431b133a0de35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hugle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 00:03:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f28303c1a9df64caa0efe002f77cea008f9e3273df1dbbd13d47f5cc68ed1aa-merged.mount: Deactivated successfully.
Jan 22 00:03:37 compute-0 podman[269877]: 2026-01-22 00:03:37.680317597 +0000 UTC m=+0.185356619 container remove ae68d7f2f0cf4618261004dbf73f223441ec962fb66fb6a580c431b133a0de35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hugle, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:03:37 compute-0 systemd[1]: libpod-conmon-ae68d7f2f0cf4618261004dbf73f223441ec962fb66fb6a580c431b133a0de35.scope: Deactivated successfully.
Jan 22 00:03:37 compute-0 podman[269918]: 2026-01-22 00:03:37.866937564 +0000 UTC m=+0.055997534 container create abc741ad6864cc3abe5934875463545f14f97fe4b0e5347fac41599149372d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kare, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 00:03:37 compute-0 systemd[1]: Started libpod-conmon-abc741ad6864cc3abe5934875463545f14f97fe4b0e5347fac41599149372d42.scope.
Jan 22 00:03:37 compute-0 podman[269918]: 2026-01-22 00:03:37.839804764 +0000 UTC m=+0.028864744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:03:37 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32307e61398e0c51b820467886617a414dc2e34041512a6d8753339d80d96ca8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32307e61398e0c51b820467886617a414dc2e34041512a6d8753339d80d96ca8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32307e61398e0c51b820467886617a414dc2e34041512a6d8753339d80d96ca8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32307e61398e0c51b820467886617a414dc2e34041512a6d8753339d80d96ca8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:03:37 compute-0 podman[269918]: 2026-01-22 00:03:37.978076414 +0000 UTC m=+0.167136434 container init abc741ad6864cc3abe5934875463545f14f97fe4b0e5347fac41599149372d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 00:03:37 compute-0 podman[269918]: 2026-01-22 00:03:37.989984553 +0000 UTC m=+0.179044513 container start abc741ad6864cc3abe5934875463545f14f97fe4b0e5347fac41599149372d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kare, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 00:03:37 compute-0 podman[269918]: 2026-01-22 00:03:37.994524784 +0000 UTC m=+0.183584754 container attach abc741ad6864cc3abe5934875463545f14f97fe4b0e5347fac41599149372d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:03:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:38.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:38 compute-0 ceph-mon[74318]: pgmap v1362: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:03:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:38 compute-0 quizzical_kare[269934]: {
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:     "1": [
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:         {
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:             "devices": [
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:                 "/dev/loop3"
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:             ],
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:             "lv_name": "ceph_lv0",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:             "lv_size": "7511998464",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:             "name": "ceph_lv0",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:             "tags": {
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:                 "ceph.cluster_name": "ceph",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:                 "ceph.crush_device_class": "",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:                 "ceph.encrypted": "0",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:                 "ceph.osd_id": "1",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:                 "ceph.type": "block",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:                 "ceph.vdo": "0"
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:             },
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:             "type": "block",
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:             "vg_name": "ceph_vg0"
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:         }
Jan 22 00:03:38 compute-0 quizzical_kare[269934]:     ]
Jan 22 00:03:38 compute-0 quizzical_kare[269934]: }
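The JSON block above is the output of "ceph-volume lvm list --format json": a map from OSD id to the logical volumes backing it, with the ceph.* LV tags given both flattened (lv_tags) and parsed (tags). Pulling out the essentials, a short sketch assuming the block has been saved to lvm_list.json:

    import json

    # Output of 'ceph-volume lvm list --format json', as logged above.
    with open("lvm_list.json") as fh:
        osds = json.load(fh)

    for osd_id, lvs in osds.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['type']} on {lv['lv_path']}, "
                  f"devices {lv['devices']}, "
                  f"osd_fsid {lv['tags']['ceph.osd_fsid']}")
    # -> osd.1: block on /dev/ceph_vg0/ceph_lv0, devices ['/dev/loop3'], ...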
Jan 22 00:03:38 compute-0 systemd[1]: libpod-abc741ad6864cc3abe5934875463545f14f97fe4b0e5347fac41599149372d42.scope: Deactivated successfully.
Jan 22 00:03:38 compute-0 podman[269918]: 2026-01-22 00:03:38.83074734 +0000 UTC m=+1.019807310 container died abc741ad6864cc3abe5934875463545f14f97fe4b0e5347fac41599149372d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kare, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 00:03:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-32307e61398e0c51b820467886617a414dc2e34041512a6d8753339d80d96ca8-merged.mount: Deactivated successfully.
Jan 22 00:03:38 compute-0 podman[269918]: 2026-01-22 00:03:38.910543029 +0000 UTC m=+1.099602969 container remove abc741ad6864cc3abe5934875463545f14f97fe4b0e5347fac41599149372d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kare, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:03:38 compute-0 systemd[1]: libpod-conmon-abc741ad6864cc3abe5934875463545f14f97fe4b0e5347fac41599149372d42.scope: Deactivated successfully.
Jan 22 00:03:38 compute-0 sudo[269810]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:39 compute-0 sudo[269955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:03:39 compute-0 sudo[269955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:39.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:39 compute-0 sudo[269955]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:39 compute-0 sudo[269980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:03:39 compute-0 sudo[269980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:39 compute-0 sudo[269980]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:39 compute-0 sudo[270005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:03:39 compute-0 sudo[270005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:39 compute-0 sudo[270005]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:03:39
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'cephfs.cephfs.data']
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
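The balancer block above is the mgr's automatic upmap pass: with mode upmap and a 5% misplaced ceiling it walks all eleven pools and prepares 0 of a possible 10 changes, i.e. PG placement is already even. The same state can be checked from the CLI the module wraps; a minimal sketch:

    import json
    import subprocess

    def ceph(*args):
        # Thin helper: run a ceph CLI command and parse its JSON output.
        out = subprocess.run(
            ["ceph", *args, "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    status = ceph("balancer", "status")
    print(status["active"], status["mode"])  # e.g. True upmap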
Jan 22 00:03:39 compute-0 sudo[270030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 22 00:03:39 compute-0 sudo[270030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:03:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
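The rbd_support handlers above reload per-pool mirror-snapshot and trash-purge schedules for the four RBD pools (vms, volumes, backups, images); the empty start_after= is the pagination cursor for the scan, here starting from the beginning. Schedules of this kind are managed through the rbd CLI; a sketch listing them for one pool (pool name from the log, flags assumed to match the stock rbd client):

    import subprocess

    # List mirror snapshot schedules for the 'vms' pool, including
    # any image-level schedules beneath it.
    subprocess.run(
        ["rbd", "mirror", "snapshot", "schedule", "ls",
         "--pool", "vms", "--recursive"],
        check=True,
    )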
Jan 22 00:03:39 compute-0 podman[270098]: 2026-01-22 00:03:39.74980091 +0000 UTC m=+0.069567435 container create 87ec216d02203d447647d3e729573e8264651f928f58414b2556ac65e0b433c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 00:03:39 compute-0 systemd[1]: Started libpod-conmon-87ec216d02203d447647d3e729573e8264651f928f58414b2556ac65e0b433c7.scope.
Jan 22 00:03:39 compute-0 podman[270098]: 2026-01-22 00:03:39.725896529 +0000 UTC m=+0.045663054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:03:39 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:03:39 compute-0 podman[270098]: 2026-01-22 00:03:39.848464753 +0000 UTC m=+0.168231348 container init 87ec216d02203d447647d3e729573e8264651f928f58414b2556ac65e0b433c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_tharp, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 00:03:39 compute-0 podman[270098]: 2026-01-22 00:03:39.85578011 +0000 UTC m=+0.175546625 container start 87ec216d02203d447647d3e729573e8264651f928f58414b2556ac65e0b433c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_tharp, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:03:39 compute-0 crazy_tharp[270114]: 167 167
Jan 22 00:03:39 compute-0 podman[270098]: 2026-01-22 00:03:39.861151927 +0000 UTC m=+0.180918502 container attach 87ec216d02203d447647d3e729573e8264651f928f58414b2556ac65e0b433c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_tharp, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:03:39 compute-0 systemd[1]: libpod-87ec216d02203d447647d3e729573e8264651f928f58414b2556ac65e0b433c7.scope: Deactivated successfully.
Jan 22 00:03:39 compute-0 podman[270119]: 2026-01-22 00:03:39.908976837 +0000 UTC m=+0.033144677 container died 87ec216d02203d447647d3e729573e8264651f928f58414b2556ac65e0b433c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:03:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-90c4b3fe395ededf4e9f872a00d0aff446be512b7e0b2f3681a075fbedb56502-merged.mount: Deactivated successfully.
Jan 22 00:03:39 compute-0 podman[270119]: 2026-01-22 00:03:39.948749468 +0000 UTC m=+0.072917268 container remove 87ec216d02203d447647d3e729573e8264651f928f58414b2556ac65e0b433c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_tharp, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:03:39 compute-0 systemd[1]: libpod-conmon-87ec216d02203d447647d3e729573e8264651f928f58414b2556ac65e0b433c7.scope: Deactivated successfully.
Jan 22 00:03:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:40.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:40 compute-0 podman[270141]: 2026-01-22 00:03:40.146749657 +0000 UTC m=+0.037064868 container create 1aefed7ccd71104c701e1696b7b4ec22eab1d91ff899c648cc7767f61971b263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 00:03:40 compute-0 systemd[1]: Started libpod-conmon-1aefed7ccd71104c701e1696b7b4ec22eab1d91ff899c648cc7767f61971b263.scope.
Jan 22 00:03:40 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd471111418d3fda14e45b9015bf230b08f91b1533302df8d0c69994dcf2c8a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd471111418d3fda14e45b9015bf230b08f91b1533302df8d0c69994dcf2c8a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd471111418d3fda14e45b9015bf230b08f91b1533302df8d0c69994dcf2c8a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd471111418d3fda14e45b9015bf230b08f91b1533302df8d0c69994dcf2c8a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:03:40 compute-0 podman[270141]: 2026-01-22 00:03:40.131768713 +0000 UTC m=+0.022083974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:03:40 compute-0 podman[270141]: 2026-01-22 00:03:40.227935651 +0000 UTC m=+0.118250942 container init 1aefed7ccd71104c701e1696b7b4ec22eab1d91ff899c648cc7767f61971b263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_margulis, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:03:40 compute-0 podman[270141]: 2026-01-22 00:03:40.244605127 +0000 UTC m=+0.134920338 container start 1aefed7ccd71104c701e1696b7b4ec22eab1d91ff899c648cc7767f61971b263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_margulis, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 22 00:03:40 compute-0 podman[270141]: 2026-01-22 00:03:40.248459195 +0000 UTC m=+0.138774486 container attach 1aefed7ccd71104c701e1696b7b4ec22eab1d91ff899c648cc7767f61971b263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_margulis, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 00:03:40 compute-0 ceph-mon[74318]: pgmap v1363: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:03:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:41.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:03:41 compute-0 lucid_margulis[270158]: {
Jan 22 00:03:41 compute-0 lucid_margulis[270158]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 22 00:03:41 compute-0 lucid_margulis[270158]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:03:41 compute-0 lucid_margulis[270158]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 00:03:41 compute-0 lucid_margulis[270158]:         "osd_id": 1,
Jan 22 00:03:41 compute-0 lucid_margulis[270158]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:03:41 compute-0 lucid_margulis[270158]:         "type": "bluestore"
Jan 22 00:03:41 compute-0 lucid_margulis[270158]:     }
Jan 22 00:03:41 compute-0 lucid_margulis[270158]: }
Jan 22 00:03:41 compute-0 systemd[1]: libpod-1aefed7ccd71104c701e1696b7b4ec22eab1d91ff899c648cc7767f61971b263.scope: Deactivated successfully.
Jan 22 00:03:41 compute-0 podman[270141]: 2026-01-22 00:03:41.138453726 +0000 UTC m=+1.028769007 container died 1aefed7ccd71104c701e1696b7b4ec22eab1d91ff899c648cc7767f61971b263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_margulis, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 00:03:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd471111418d3fda14e45b9015bf230b08f91b1533302df8d0c69994dcf2c8a5-merged.mount: Deactivated successfully.
Jan 22 00:03:41 compute-0 podman[270141]: 2026-01-22 00:03:41.196012728 +0000 UTC m=+1.086327939 container remove 1aefed7ccd71104c701e1696b7b4ec22eab1d91ff899c648cc7767f61971b263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 00:03:41 compute-0 systemd[1]: libpod-conmon-1aefed7ccd71104c701e1696b7b4ec22eab1d91ff899c648cc7767f61971b263.scope: Deactivated successfully.
Jan 22 00:03:41 compute-0 sudo[270030]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:03:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:03:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:03:41 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:03:41 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 0f6d207f-db33-4053-ac71-26882db42670 does not exist
Jan 22 00:03:41 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 840020b8-5b87-4e7b-b8d6-ceb37a0e1cc3 does not exist
Jan 22 00:03:41 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev bf0111b1-f623-48a0-b124-42c53ff16b26 does not exist
Jan 22 00:03:41 compute-0 sudo[270192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:03:41 compute-0 sudo[270192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:41 compute-0 sudo[270192]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:41 compute-0 sudo[270217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 00:03:41 compute-0 sudo[270217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:41 compute-0 sudo[270217]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:42.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:42 compute-0 ceph-mon[74318]: pgmap v1364: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:42 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:03:42 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:03:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:43.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:03:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:44.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:44 compute-0 ceph-mon[74318]: pgmap v1365: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:45.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:46 compute-0 podman[270245]: 2026-01-22 00:03:46.020971079 +0000 UTC m=+0.122708811 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 22 00:03:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:46.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:46 compute-0 ceph-mon[74318]: pgmap v1366: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:03:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:47.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:03:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:48.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:48 compute-0 ceph-mon[74318]: pgmap v1367: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:03:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:03:48.764 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:03:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:03:48.767 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:03:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:03:48.767 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:03:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:03:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:49.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:03:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:03:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:50.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:03:50 compute-0 ceph-mon[74318]: pgmap v1368: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:51.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:52.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:52 compute-0 ceph-mon[74318]: pgmap v1369: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:53.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:03:54 compute-0 sudo[270277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:03:54 compute-0 sudo[270277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:03:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:54.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:03:54 compute-0 sudo[270277]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:54 compute-0 podman[270301]: 2026-01-22 00:03:54.224948677 +0000 UTC m=+0.070506224 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 00:03:54 compute-0 sudo[270308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:03:54 compute-0 sudo[270308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:03:54 compute-0 sudo[270308]: pam_unix(sudo:session): session closed for user root
Jan 22 00:03:54 compute-0 ceph-mon[74318]: pgmap v1370: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.217749627768472e-05 of space, bias 1.0, pg target 0.003653248883305416 quantized to 32 (current 32)
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 00:03:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:55.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:56.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:56 compute-0 ceph-mon[74318]: pgmap v1371: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:03:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:57.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:03:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:03:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:03:58.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:03:58 compute-0 ceph-mon[74318]: pgmap v1372: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:03:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:03:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:03:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:03:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:03:59.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:04:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:00.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:04:00 compute-0 ceph-mon[74318]: pgmap v1373: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:04:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:01.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:04:01 compute-0 anacron[30932]: Job `cron.monthly' started
Jan 22 00:04:01 compute-0 anacron[30932]: Job `cron.monthly' terminated
Jan 22 00:04:01 compute-0 anacron[30932]: Normal exit (3 jobs run)
Jan 22 00:04:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:02.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:02 compute-0 ceph-mon[74318]: pgmap v1374: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:04:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:03.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:04:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:04.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:04 compute-0 ceph-mon[74318]: pgmap v1375: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:04:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:05.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:06.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:06 compute-0 ceph-mon[74318]: pgmap v1376: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:04:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:07.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:08.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:08 compute-0 ceph-mon[74318]: pgmap v1377: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:04:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:09.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:04:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:04:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:04:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:04:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:04:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:04:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:04:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:10.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Jan 22 00:04:10 compute-0 ceph-mon[74318]: pgmap v1378: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Jan 22 00:04:10 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Jan 22 00:04:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail; 307 B/s rd, 204 B/s wr, 0 op/s
Jan 22 00:04:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:04:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:11.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:11 compute-0 ceph-mon[74318]: osdmap e172: 3 total, 3 up, 3 in
Jan 22 00:04:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:04:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:12.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:04:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 50 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 820 KiB/s wr, 6 op/s
Jan 22 00:04:12 compute-0 ceph-mon[74318]: pgmap v1380: 305 pgs: 305 active+clean; 42 MiB data, 252 MiB used, 21 GiB / 21 GiB avail; 307 B/s rd, 204 B/s wr, 0 op/s
Jan 22 00:04:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:04:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:13.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:13 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:04:13.108 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 00:04:13 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:04:13.111 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 00:04:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:04:13 compute-0 ceph-mon[74318]: pgmap v1381: 305 pgs: 305 active+clean; 50 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 820 KiB/s wr, 6 op/s
Jan 22 00:04:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:14.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:14 compute-0 sudo[270358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:04:14 compute-0 sudo[270358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:14 compute-0 sudo[270358]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:14 compute-0 sudo[270383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:04:14 compute-0 sudo[270383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:14 compute-0 sudo[270383]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 50 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 820 KiB/s wr, 6 op/s
Jan 22 00:04:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:15.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:15 compute-0 ceph-mon[74318]: pgmap v1382: 305 pgs: 305 active+clean; 50 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 820 KiB/s wr, 6 op/s
Jan 22 00:04:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:16.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 22 00:04:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Jan 22 00:04:16 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Jan 22 00:04:16 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Jan 22 00:04:16 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1368775510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:04:16 compute-0 nova_compute[247516]: 2026-01-22 00:04:16.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:04:16 compute-0 nova_compute[247516]: 2026-01-22 00:04:16.994 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:04:16 compute-0 nova_compute[247516]: 2026-01-22 00:04:16.995 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:04:17 compute-0 nova_compute[247516]: 2026-01-22 00:04:17.024 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 00:04:17 compute-0 nova_compute[247516]: 2026-01-22 00:04:17.025 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:04:17 compute-0 nova_compute[247516]: 2026-01-22 00:04:17.026 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:04:17 compute-0 podman[270409]: 2026-01-22 00:04:17.046193217 +0000 UTC m=+0.145026787 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 22 00:04:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:17.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:17 compute-0 ceph-mon[74318]: pgmap v1383: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 22 00:04:17 compute-0 ceph-mon[74318]: osdmap e173: 3 total, 3 up, 3 in
Jan 22 00:04:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:18.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:04:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 2.5 MiB/s wr, 22 op/s
Jan 22 00:04:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Jan 22 00:04:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Jan 22 00:04:18 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Jan 22 00:04:19 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/4273191072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:04:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:04:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:19.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:20 compute-0 ceph-mon[74318]: pgmap v1385: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 2.5 MiB/s wr, 22 op/s
Jan 22 00:04:20 compute-0 ceph-mon[74318]: osdmap e174: 3 total, 3 up, 3 in
Jan 22 00:04:20 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3790979856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:04:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:20.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 70 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 46 KiB/s rd, 4.1 MiB/s wr, 62 op/s
Jan 22 00:04:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/588731154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:04:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:04:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:21.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:22 compute-0 ceph-mon[74318]: pgmap v1387: 305 pgs: 305 active+clean; 70 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 46 KiB/s rd, 4.1 MiB/s wr, 62 op/s
Jan 22 00:04:22 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:04:22.113 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 00:04:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:04:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:22.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 4.1 MiB/s wr, 70 op/s
Jan 22 00:04:23 compute-0 nova_compute[247516]: 2026-01-22 00:04:23.020 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:04:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:23.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:04:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Jan 22 00:04:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Jan 22 00:04:23 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Jan 22 00:04:23 compute-0 nova_compute[247516]: 2026-01-22 00:04:23.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:04:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:24.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:24 compute-0 ceph-mon[74318]: pgmap v1388: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 4.1 MiB/s wr, 70 op/s
Jan 22 00:04:24 compute-0 ceph-mon[74318]: osdmap e175: 3 total, 3 up, 3 in
Jan 22 00:04:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 2.6 MiB/s wr, 56 op/s
Jan 22 00:04:24 compute-0 podman[270438]: 2026-01-22 00:04:24.975345887 +0000 UTC m=+0.077264149 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 00:04:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:25.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Jan 22 00:04:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Jan 22 00:04:25 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Jan 22 00:04:25 compute-0 nova_compute[247516]: 2026-01-22 00:04:25.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:04:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:26.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:26 compute-0 ceph-mon[74318]: pgmap v1390: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 2.6 MiB/s wr, 56 op/s
Jan 22 00:04:26 compute-0 ceph-mon[74318]: osdmap e176: 3 total, 3 up, 3 in
Jan 22 00:04:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/586765779' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:04:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/586765779' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:04:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 2.6 MiB/s wr, 66 op/s
Jan 22 00:04:26 compute-0 nova_compute[247516]: 2026-01-22 00:04:26.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:04:26 compute-0 nova_compute[247516]: 2026-01-22 00:04:26.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:04:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:04:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:27.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:28.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:04:28 compute-0 ceph-mon[74318]: pgmap v1392: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 2.6 MiB/s wr, 66 op/s
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:04:28.664753) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040268664842, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2202, "num_deletes": 254, "total_data_size": 3969977, "memory_usage": 4021696, "flush_reason": "Manual Compaction"}
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040268716131, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 3890321, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28919, "largest_seqno": 31119, "table_properties": {"data_size": 3880250, "index_size": 6437, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20890, "raw_average_key_size": 20, "raw_value_size": 3860122, "raw_average_value_size": 3829, "num_data_blocks": 280, "num_entries": 1008, "num_filter_entries": 1008, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769040054, "oldest_key_time": 1769040054, "file_creation_time": 1769040268, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 51441 microseconds, and 19771 cpu microseconds.
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:04:28.716213) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 3890321 bytes OK
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:04:28.716244) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:04:28.718606) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:04:28.718621) EVENT_LOG_v1 {"time_micros": 1769040268718616, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:04:28.718647) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3960981, prev total WAL file size 3960981, number of live WAL files 2.
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:04:28.719958) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3799KB)], [65(8658KB)]
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040268720149, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 12756797, "oldest_snapshot_seqno": -1}
Jan 22 00:04:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 895 B/s wr, 16 op/s
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5710 keys, 10767648 bytes, temperature: kUnknown
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040268867344, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 10767648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10727661, "index_size": 24581, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14341, "raw_key_size": 144210, "raw_average_key_size": 25, "raw_value_size": 10622881, "raw_average_value_size": 1860, "num_data_blocks": 998, "num_entries": 5710, "num_filter_entries": 5710, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769040268, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:04:28.867749) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10767648 bytes
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:04:28.915073) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 86.6 rd, 73.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.5 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(6.0) write-amplify(2.8) OK, records in: 6236, records dropped: 526 output_compression: NoCompression
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:04:28.915105) EVENT_LOG_v1 {"time_micros": 1769040268915090, "job": 36, "event": "compaction_finished", "compaction_time_micros": 147316, "compaction_time_cpu_micros": 51687, "output_level": 6, "num_output_files": 1, "total_output_size": 10767648, "num_input_records": 6236, "num_output_records": 5710, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040268916396, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040268919499, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:04:28.719819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:04:28.919608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:04:28.919614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:04:28.919619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:04:28.919623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:04:28 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:04:28.919626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:04:28 compute-0 nova_compute[247516]: 2026-01-22 00:04:28.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:04:29 compute-0 nova_compute[247516]: 2026-01-22 00:04:29.024 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:04:29 compute-0 nova_compute[247516]: 2026-01-22 00:04:29.024 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:04:29 compute-0 nova_compute[247516]: 2026-01-22 00:04:29.024 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:04:29 compute-0 nova_compute[247516]: 2026-01-22 00:04:29.025 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:04:29 compute-0 nova_compute[247516]: 2026-01-22 00:04:29.025 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:04:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:29.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:04:29 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3401605087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:04:29 compute-0 nova_compute[247516]: 2026-01-22 00:04:29.507 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:04:29 compute-0 nova_compute[247516]: 2026-01-22 00:04:29.739 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:04:29 compute-0 nova_compute[247516]: 2026-01-22 00:04:29.741 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5161MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:04:29 compute-0 nova_compute[247516]: 2026-01-22 00:04:29.742 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:04:29 compute-0 nova_compute[247516]: 2026-01-22 00:04:29.742 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:04:29 compute-0 nova_compute[247516]: 2026-01-22 00:04:29.857 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 22 00:04:29 compute-0 nova_compute[247516]: 2026-01-22 00:04:29.857 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:04:29 compute-0 nova_compute[247516]: 2026-01-22 00:04:29.858 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:04:29 compute-0 nova_compute[247516]: 2026-01-22 00:04:29.909 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:04:30 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3401605087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:04:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:30.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:04:30 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/552152528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:04:30 compute-0 nova_compute[247516]: 2026-01-22 00:04:30.413 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:04:30 compute-0 nova_compute[247516]: 2026-01-22 00:04:30.420 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:04:30 compute-0 nova_compute[247516]: 2026-01-22 00:04:30.435 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 00:04:30 compute-0 nova_compute[247516]: 2026-01-22 00:04:30.437 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:04:30 compute-0 nova_compute[247516]: 2026-01-22 00:04:30.437 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:04:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 22 00:04:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:04:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:31.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:31 compute-0 ceph-mon[74318]: pgmap v1393: 305 pgs: 305 active+clean; 62 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 895 B/s wr, 16 op/s
Jan 22 00:04:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/552152528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:04:32 compute-0 ceph-mon[74318]: pgmap v1394: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 22 00:04:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:32.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 22 00:04:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:04:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:33.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:33 compute-0 nova_compute[247516]: 2026-01-22 00:04:33.438 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:04:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:04:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Jan 22 00:04:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Jan 22 00:04:33 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Jan 22 00:04:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:04:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:34.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:34 compute-0 sudo[270507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:04:34 compute-0 sudo[270507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:34 compute-0 sudo[270507]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:34 compute-0 sudo[270532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:04:34 compute-0 sudo[270532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:34 compute-0 sudo[270532]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:34 compute-0 ceph-mon[74318]: pgmap v1395: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 22 00:04:34 compute-0 ceph-mon[74318]: osdmap e177: 3 total, 3 up, 3 in
Jan 22 00:04:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 1.3 KiB/s wr, 20 op/s
Jan 22 00:04:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:04:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:35.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:36.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:36 compute-0 ceph-mon[74318]: pgmap v1397: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 1.3 KiB/s wr, 20 op/s
Jan 22 00:04:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Jan 22 00:04:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:04:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:37.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:04:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:38.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:04:38 compute-0 ceph-mon[74318]: pgmap v1398: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Jan 22 00:04:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Jan 22 00:04:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:04:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:39.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:04:39
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.data', 'volumes', '.rgw.root']
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:04:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:04:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:40.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:40 compute-0 ceph-mon[74318]: pgmap v1399: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Jan 22 00:04:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:04:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:41.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:41 compute-0 sudo[270561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:04:41 compute-0 sudo[270561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:41 compute-0 sudo[270561]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:42 compute-0 sudo[270586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:04:42 compute-0 sudo[270586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:42 compute-0 sudo[270586]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:42 compute-0 sudo[270611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:04:42 compute-0 sudo[270611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:42 compute-0 sudo[270611]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:42 compute-0 sudo[270636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 00:04:42 compute-0 sudo[270636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:42.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:42 compute-0 sudo[270636]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:42 compute-0 ceph-mon[74318]: pgmap v1400: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:04:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:43.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:04:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 00:04:44 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:04:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 00:04:44 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:04:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:44.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:44 compute-0 ceph-mon[74318]: pgmap v1401: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:04:44 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:04:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:04:45 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:04:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 00:04:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:04:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 00:04:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:04:45 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 23e8682f-8b88-4170-9734-2fa8d90c7ab2 does not exist
Jan 22 00:04:45 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 4c93d82e-62c5-4ffd-a02d-9d42b47153e9 does not exist
Jan 22 00:04:45 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 4caaddff-14e8-4c9b-ae19-df80ea0d6af7 does not exist
Jan 22 00:04:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 00:04:45 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:04:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:45.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 00:04:45 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:04:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:04:45 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:04:45 compute-0 sudo[270693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:04:45 compute-0 sudo[270693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:45 compute-0 sudo[270693]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:45 compute-0 sudo[270718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:04:45 compute-0 sudo[270718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:45 compute-0 sudo[270718]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:45 compute-0 sudo[270743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:04:45 compute-0 sudo[270743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:45 compute-0 sudo[270743]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:45 compute-0 sudo[270768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 00:04:45 compute-0 sudo[270768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:45 compute-0 podman[270835]: 2026-01-22 00:04:45.756721875 +0000 UTC m=+0.021031378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:04:45 compute-0 podman[270835]: 2026-01-22 00:04:45.945007763 +0000 UTC m=+0.209317256 container create d4817f61b18eec03e8cd20a9e2a1254352dfb7c55d1c6e51020e961e512d1615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chandrasekhar, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 00:04:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:04:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:04:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:04:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:04:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:04:45 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:04:46 compute-0 systemd[1]: Started libpod-conmon-d4817f61b18eec03e8cd20a9e2a1254352dfb7c55d1c6e51020e961e512d1615.scope.
Jan 22 00:04:46 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:04:46 compute-0 podman[270835]: 2026-01-22 00:04:46.141166334 +0000 UTC m=+0.405475847 container init d4817f61b18eec03e8cd20a9e2a1254352dfb7c55d1c6e51020e961e512d1615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 00:04:46 compute-0 podman[270835]: 2026-01-22 00:04:46.149944334 +0000 UTC m=+0.414253827 container start d4817f61b18eec03e8cd20a9e2a1254352dfb7c55d1c6e51020e961e512d1615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:04:46 compute-0 podman[270835]: 2026-01-22 00:04:46.153961487 +0000 UTC m=+0.418271010 container attach d4817f61b18eec03e8cd20a9e2a1254352dfb7c55d1c6e51020e961e512d1615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 00:04:46 compute-0 upbeat_chandrasekhar[270851]: 167 167
Jan 22 00:04:46 compute-0 systemd[1]: libpod-d4817f61b18eec03e8cd20a9e2a1254352dfb7c55d1c6e51020e961e512d1615.scope: Deactivated successfully.
Jan 22 00:04:46 compute-0 podman[270835]: 2026-01-22 00:04:46.158548729 +0000 UTC m=+0.422858232 container died d4817f61b18eec03e8cd20a9e2a1254352dfb7c55d1c6e51020e961e512d1615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chandrasekhar, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 00:04:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ea70fb13e7ebfdf746869ba9d1f6641d52970847a96c26ed1ca4dfd2dd627f2-merged.mount: Deactivated successfully.
Jan 22 00:04:46 compute-0 podman[270835]: 2026-01-22 00:04:46.203488302 +0000 UTC m=+0.467797785 container remove d4817f61b18eec03e8cd20a9e2a1254352dfb7c55d1c6e51020e961e512d1615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chandrasekhar, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 00:04:46 compute-0 systemd[1]: libpod-conmon-d4817f61b18eec03e8cd20a9e2a1254352dfb7c55d1c6e51020e961e512d1615.scope: Deactivated successfully.
Jan 22 00:04:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:46.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:46 compute-0 podman[270875]: 2026-01-22 00:04:46.42583618 +0000 UTC m=+0.062671242 container create 7333ef1b7fa496574f78f2842ffa6eb19a5d7b9401b754505d5d89c10583040c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_euclid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:04:46 compute-0 systemd[1]: Started libpod-conmon-7333ef1b7fa496574f78f2842ffa6eb19a5d7b9401b754505d5d89c10583040c.scope.
Jan 22 00:04:46 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:04:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eab8ac47c690342fbb970426e8fb05e4b42857f81fdf18122f375658f59abbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:04:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eab8ac47c690342fbb970426e8fb05e4b42857f81fdf18122f375658f59abbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:04:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eab8ac47c690342fbb970426e8fb05e4b42857f81fdf18122f375658f59abbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:04:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eab8ac47c690342fbb970426e8fb05e4b42857f81fdf18122f375658f59abbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:04:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eab8ac47c690342fbb970426e8fb05e4b42857f81fdf18122f375658f59abbd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 00:04:46 compute-0 podman[270875]: 2026-01-22 00:04:46.402959755 +0000 UTC m=+0.039794877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:04:46 compute-0 podman[270875]: 2026-01-22 00:04:46.50769282 +0000 UTC m=+0.144527932 container init 7333ef1b7fa496574f78f2842ffa6eb19a5d7b9401b754505d5d89c10583040c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_euclid, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:04:46 compute-0 podman[270875]: 2026-01-22 00:04:46.515059737 +0000 UTC m=+0.151894839 container start 7333ef1b7fa496574f78f2842ffa6eb19a5d7b9401b754505d5d89c10583040c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_euclid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:04:46 compute-0 podman[270875]: 2026-01-22 00:04:46.519338229 +0000 UTC m=+0.156173301 container attach 7333ef1b7fa496574f78f2842ffa6eb19a5d7b9401b754505d5d89c10583040c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_euclid, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 00:04:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:46 compute-0 ceph-mon[74318]: pgmap v1402: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:47.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:47 compute-0 recursing_euclid[270892]: --> passed data devices: 0 physical, 1 LVM
Jan 22 00:04:47 compute-0 recursing_euclid[270892]: --> relative data size: 1.0
Jan 22 00:04:47 compute-0 recursing_euclid[270892]: --> All data devices are unavailable
Jan 22 00:04:47 compute-0 systemd[1]: libpod-7333ef1b7fa496574f78f2842ffa6eb19a5d7b9401b754505d5d89c10583040c.scope: Deactivated successfully.
Jan 22 00:04:47 compute-0 podman[270875]: 2026-01-22 00:04:47.429190947 +0000 UTC m=+1.066026049 container died 7333ef1b7fa496574f78f2842ffa6eb19a5d7b9401b754505d5d89c10583040c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_euclid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 00:04:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3eab8ac47c690342fbb970426e8fb05e4b42857f81fdf18122f375658f59abbd-merged.mount: Deactivated successfully.
Jan 22 00:04:47 compute-0 podman[270875]: 2026-01-22 00:04:47.539323898 +0000 UTC m=+1.176158970 container remove 7333ef1b7fa496574f78f2842ffa6eb19a5d7b9401b754505d5d89c10583040c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 00:04:47 compute-0 systemd[1]: libpod-conmon-7333ef1b7fa496574f78f2842ffa6eb19a5d7b9401b754505d5d89c10583040c.scope: Deactivated successfully.
Jan 22 00:04:47 compute-0 sudo[270768]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:47 compute-0 podman[270909]: 2026-01-22 00:04:47.60172238 +0000 UTC m=+0.132092679 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 00:04:47 compute-0 sudo[270946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:04:47 compute-0 sudo[270946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:47 compute-0 sudo[270946]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:47 compute-0 sudo[270972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:04:47 compute-0 sudo[270972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:47 compute-0 sudo[270972]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:47 compute-0 sudo[270997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:04:47 compute-0 sudo[270997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:47 compute-0 sudo[270997]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:47 compute-0 sudo[271022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 22 00:04:47 compute-0 sudo[271022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:48 compute-0 ceph-mon[74318]: pgmap v1403: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:04:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:48.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:04:48 compute-0 podman[271085]: 2026-01-22 00:04:48.375372054 +0000 UTC m=+0.060902027 container create 995f54c79477cc672983a6c34ccd403f0f82b907e7c51731bfd751e28079a37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kirch, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:04:48 compute-0 systemd[1]: Started libpod-conmon-995f54c79477cc672983a6c34ccd403f0f82b907e7c51731bfd751e28079a37d.scope.
Jan 22 00:04:48 compute-0 podman[271085]: 2026-01-22 00:04:48.347910788 +0000 UTC m=+0.033440791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:04:48 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:04:48 compute-0 podman[271085]: 2026-01-22 00:04:48.47105725 +0000 UTC m=+0.156587253 container init 995f54c79477cc672983a6c34ccd403f0f82b907e7c51731bfd751e28079a37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 00:04:48 compute-0 podman[271085]: 2026-01-22 00:04:48.48304674 +0000 UTC m=+0.168576723 container start 995f54c79477cc672983a6c34ccd403f0f82b907e7c51731bfd751e28079a37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kirch, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 00:04:48 compute-0 podman[271085]: 2026-01-22 00:04:48.487455426 +0000 UTC m=+0.172985449 container attach 995f54c79477cc672983a6c34ccd403f0f82b907e7c51731bfd751e28079a37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 00:04:48 compute-0 determined_kirch[271101]: 167 167
Jan 22 00:04:48 compute-0 systemd[1]: libpod-995f54c79477cc672983a6c34ccd403f0f82b907e7c51731bfd751e28079a37d.scope: Deactivated successfully.
Jan 22 00:04:48 compute-0 podman[271085]: 2026-01-22 00:04:48.492880563 +0000 UTC m=+0.178410496 container died 995f54c79477cc672983a6c34ccd403f0f82b907e7c51731bfd751e28079a37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:04:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f0820a79fb19efdca570063cce620ff2f294b1ba3f818b10675686030f1f8da-merged.mount: Deactivated successfully.
Jan 22 00:04:48 compute-0 podman[271085]: 2026-01-22 00:04:48.535967079 +0000 UTC m=+0.221497052 container remove 995f54c79477cc672983a6c34ccd403f0f82b907e7c51731bfd751e28079a37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 00:04:48 compute-0 systemd[1]: libpod-conmon-995f54c79477cc672983a6c34ccd403f0f82b907e7c51731bfd751e28079a37d.scope: Deactivated successfully.
Jan 22 00:04:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:04:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:04:48.764 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:04:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:04:48.765 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:04:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:04:48.765 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:04:48 compute-0 podman[271127]: 2026-01-22 00:04:48.793067786 +0000 UTC m=+0.066430686 container create bfb31f0999f14e5916d43b78f32ece0e6ecd2f54bbc228ca5a655f943a25173e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hawking, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:04:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:48 compute-0 systemd[1]: Started libpod-conmon-bfb31f0999f14e5916d43b78f32ece0e6ecd2f54bbc228ca5a655f943a25173e.scope.
Jan 22 00:04:48 compute-0 podman[271127]: 2026-01-22 00:04:48.76718774 +0000 UTC m=+0.040550730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:04:48 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9af3d57689488a87e973ee54961b194618368d21084189a601949a50b37c56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9af3d57689488a87e973ee54961b194618368d21084189a601949a50b37c56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9af3d57689488a87e973ee54961b194618368d21084189a601949a50b37c56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9af3d57689488a87e973ee54961b194618368d21084189a601949a50b37c56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:04:48 compute-0 podman[271127]: 2026-01-22 00:04:48.918541361 +0000 UTC m=+0.191904321 container init bfb31f0999f14e5916d43b78f32ece0e6ecd2f54bbc228ca5a655f943a25173e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 00:04:48 compute-0 podman[271127]: 2026-01-22 00:04:48.930811528 +0000 UTC m=+0.204174458 container start bfb31f0999f14e5916d43b78f32ece0e6ecd2f54bbc228ca5a655f943a25173e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:04:48 compute-0 podman[271127]: 2026-01-22 00:04:48.935059219 +0000 UTC m=+0.208422199 container attach bfb31f0999f14e5916d43b78f32ece0e6ecd2f54bbc228ca5a655f943a25173e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 00:04:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:49.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:49 compute-0 determined_hawking[271143]: {
Jan 22 00:04:49 compute-0 determined_hawking[271143]:     "1": [
Jan 22 00:04:49 compute-0 determined_hawking[271143]:         {
Jan 22 00:04:49 compute-0 determined_hawking[271143]:             "devices": [
Jan 22 00:04:49 compute-0 determined_hawking[271143]:                 "/dev/loop3"
Jan 22 00:04:49 compute-0 determined_hawking[271143]:             ],
Jan 22 00:04:49 compute-0 determined_hawking[271143]:             "lv_name": "ceph_lv0",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:             "lv_size": "7511998464",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:             "name": "ceph_lv0",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:             "tags": {
Jan 22 00:04:49 compute-0 determined_hawking[271143]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:                 "ceph.cluster_name": "ceph",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:                 "ceph.crush_device_class": "",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:                 "ceph.encrypted": "0",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:                 "ceph.osd_id": "1",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:                 "ceph.type": "block",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:                 "ceph.vdo": "0"
Jan 22 00:04:49 compute-0 determined_hawking[271143]:             },
Jan 22 00:04:49 compute-0 determined_hawking[271143]:             "type": "block",
Jan 22 00:04:49 compute-0 determined_hawking[271143]:             "vg_name": "ceph_vg0"
Jan 22 00:04:49 compute-0 determined_hawking[271143]:         }
Jan 22 00:04:49 compute-0 determined_hawking[271143]:     ]
Jan 22 00:04:49 compute-0 determined_hawking[271143]: }
Jan 22 00:04:49 compute-0 systemd[1]: libpod-bfb31f0999f14e5916d43b78f32ece0e6ecd2f54bbc228ca5a655f943a25173e.scope: Deactivated successfully.
Jan 22 00:04:49 compute-0 podman[271127]: 2026-01-22 00:04:49.740078009 +0000 UTC m=+1.013440939 container died bfb31f0999f14e5916d43b78f32ece0e6ecd2f54bbc228ca5a655f943a25173e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:04:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d9af3d57689488a87e973ee54961b194618368d21084189a601949a50b37c56-merged.mount: Deactivated successfully.
Jan 22 00:04:49 compute-0 podman[271127]: 2026-01-22 00:04:49.813104078 +0000 UTC m=+1.086467008 container remove bfb31f0999f14e5916d43b78f32ece0e6ecd2f54bbc228ca5a655f943a25173e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hawking, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:04:49 compute-0 systemd[1]: libpod-conmon-bfb31f0999f14e5916d43b78f32ece0e6ecd2f54bbc228ca5a655f943a25173e.scope: Deactivated successfully.
Jan 22 00:04:49 compute-0 sudo[271022]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:49 compute-0 sudo[271165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:04:49 compute-0 sudo[271165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:49 compute-0 sudo[271165]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:50 compute-0 sudo[271190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:04:50 compute-0 sudo[271190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:50 compute-0 sudo[271190]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:50 compute-0 sudo[271215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:04:50 compute-0 sudo[271215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:50 compute-0 sudo[271215]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:50 compute-0 ceph-mon[74318]: pgmap v1404: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:50 compute-0 sudo[271240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 22 00:04:50 compute-0 sudo[271240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:50.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:50 compute-0 podman[271308]: 2026-01-22 00:04:50.599977769 +0000 UTC m=+0.057769280 container create fe6baf2e27e0cce2c8db59414536cdea0128b4b64c93404284c895fe55710c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_swartz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 00:04:50 compute-0 systemd[1]: Started libpod-conmon-fe6baf2e27e0cce2c8db59414536cdea0128b4b64c93404284c895fe55710c07.scope.
Jan 22 00:04:50 compute-0 podman[271308]: 2026-01-22 00:04:50.572592796 +0000 UTC m=+0.030384317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:04:50 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:04:50 compute-0 podman[271308]: 2026-01-22 00:04:50.766051753 +0000 UTC m=+0.223843304 container init fe6baf2e27e0cce2c8db59414536cdea0128b4b64c93404284c895fe55710c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 00:04:50 compute-0 podman[271308]: 2026-01-22 00:04:50.772957925 +0000 UTC m=+0.230749436 container start fe6baf2e27e0cce2c8db59414536cdea0128b4b64c93404284c895fe55710c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 00:04:50 compute-0 relaxed_swartz[271324]: 167 167
Jan 22 00:04:50 compute-0 systemd[1]: libpod-fe6baf2e27e0cce2c8db59414536cdea0128b4b64c93404284c895fe55710c07.scope: Deactivated successfully.
Jan 22 00:04:50 compute-0 podman[271308]: 2026-01-22 00:04:50.789239687 +0000 UTC m=+0.247031248 container attach fe6baf2e27e0cce2c8db59414536cdea0128b4b64c93404284c895fe55710c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_swartz, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:04:50 compute-0 podman[271308]: 2026-01-22 00:04:50.789827495 +0000 UTC m=+0.247618996 container died fe6baf2e27e0cce2c8db59414536cdea0128b4b64c93404284c895fe55710c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:04:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-00b680a5fee7a37c9eabc62770836f3547cd59aa7c223654011fa6ede07d2e92-merged.mount: Deactivated successfully.
Jan 22 00:04:50 compute-0 podman[271308]: 2026-01-22 00:04:50.837043499 +0000 UTC m=+0.294835000 container remove fe6baf2e27e0cce2c8db59414536cdea0128b4b64c93404284c895fe55710c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_swartz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:04:50 compute-0 systemd[1]: libpod-conmon-fe6baf2e27e0cce2c8db59414536cdea0128b4b64c93404284c895fe55710c07.scope: Deactivated successfully.
Jan 22 00:04:51 compute-0 podman[271348]: 2026-01-22 00:04:51.077890316 +0000 UTC m=+0.068034596 container create 8bfacdeb878de72ff24dc18cd72969b8c0e728e4d16aa5069be8d349fd48e854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_leavitt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 00:04:51 compute-0 systemd[1]: Started libpod-conmon-8bfacdeb878de72ff24dc18cd72969b8c0e728e4d16aa5069be8d349fd48e854.scope.
Jan 22 00:04:51 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:04:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b97379ff99123f6036baa4f317ad8848ca5dfab7b75a40cbb9dc9164b0698676/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:04:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b97379ff99123f6036baa4f317ad8848ca5dfab7b75a40cbb9dc9164b0698676/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:04:51 compute-0 podman[271348]: 2026-01-22 00:04:51.054974231 +0000 UTC m=+0.045118581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:04:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b97379ff99123f6036baa4f317ad8848ca5dfab7b75a40cbb9dc9164b0698676/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:04:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b97379ff99123f6036baa4f317ad8848ca5dfab7b75a40cbb9dc9164b0698676/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:04:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:51.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:51 compute-0 podman[271348]: 2026-01-22 00:04:51.165032629 +0000 UTC m=+0.155176939 container init 8bfacdeb878de72ff24dc18cd72969b8c0e728e4d16aa5069be8d349fd48e854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 00:04:51 compute-0 podman[271348]: 2026-01-22 00:04:51.180798395 +0000 UTC m=+0.170942665 container start 8bfacdeb878de72ff24dc18cd72969b8c0e728e4d16aa5069be8d349fd48e854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_leavitt, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:04:51 compute-0 podman[271348]: 2026-01-22 00:04:51.184340594 +0000 UTC m=+0.174484904 container attach 8bfacdeb878de72ff24dc18cd72969b8c0e728e4d16aa5069be8d349fd48e854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_leavitt, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:04:52 compute-0 pensive_leavitt[271365]: {
Jan 22 00:04:52 compute-0 pensive_leavitt[271365]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 22 00:04:52 compute-0 pensive_leavitt[271365]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:04:52 compute-0 pensive_leavitt[271365]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 00:04:52 compute-0 pensive_leavitt[271365]:         "osd_id": 1,
Jan 22 00:04:52 compute-0 pensive_leavitt[271365]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:04:52 compute-0 pensive_leavitt[271365]:         "type": "bluestore"
Jan 22 00:04:52 compute-0 pensive_leavitt[271365]:     }
Jan 22 00:04:52 compute-0 pensive_leavitt[271365]: }
Jan 22 00:04:52 compute-0 systemd[1]: libpod-8bfacdeb878de72ff24dc18cd72969b8c0e728e4d16aa5069be8d349fd48e854.scope: Deactivated successfully.
Jan 22 00:04:52 compute-0 ceph-mon[74318]: pgmap v1405: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:52 compute-0 podman[271387]: 2026-01-22 00:04:52.193746718 +0000 UTC m=+0.042299984 container died 8bfacdeb878de72ff24dc18cd72969b8c0e728e4d16aa5069be8d349fd48e854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_leavitt, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 00:04:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-b97379ff99123f6036baa4f317ad8848ca5dfab7b75a40cbb9dc9164b0698676-merged.mount: Deactivated successfully.
Jan 22 00:04:52 compute-0 podman[271387]: 2026-01-22 00:04:52.266286022 +0000 UTC m=+0.114839258 container remove 8bfacdeb878de72ff24dc18cd72969b8c0e728e4d16aa5069be8d349fd48e854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_leavitt, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:04:52 compute-0 systemd[1]: libpod-conmon-8bfacdeb878de72ff24dc18cd72969b8c0e728e4d16aa5069be8d349fd48e854.scope: Deactivated successfully.
Jan 22 00:04:52 compute-0 sudo[271240]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:04:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:04:52 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:04:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:52.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:52 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:04:52 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 9c505f91-76c8-4bdf-b4de-f8fc8863e539 does not exist
Jan 22 00:04:52 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 9bd281a8-f3ae-4ae9-b6e8-c381e8c1ea03 does not exist
Jan 22 00:04:52 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 0154dc17-7de6-4b7c-bb49-d3706d405d4f does not exist
Jan 22 00:04:52 compute-0 sudo[271402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:04:52 compute-0 sudo[271402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:52 compute-0 sudo[271402]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:52 compute-0 sudo[271427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 00:04:52 compute-0 sudo[271427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:52 compute-0 sudo[271427]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:04:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:53.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:04:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:04:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:04:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:04:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:04:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:54.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:04:54 compute-0 ceph-mon[74318]: pgmap v1406: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.217749627768472e-05 of space, bias 1.0, pg target 0.003653248883305416 quantized to 32 (current 32)
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 00:04:54 compute-0 sudo[271453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:04:54 compute-0 sudo[271453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:54 compute-0 sudo[271453]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:54 compute-0 sudo[271478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:04:54 compute-0 sudo[271478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:04:54 compute-0 sudo[271478]: pam_unix(sudo:session): session closed for user root
Jan 22 00:04:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:55.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:55 compute-0 podman[271504]: 2026-01-22 00:04:55.962110101 +0000 UTC m=+0.076871669 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 22 00:04:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:56.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:56 compute-0 ceph-mon[74318]: pgmap v1407: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:57.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:04:58.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:04:58 compute-0 ceph-mon[74318]: pgmap v1408: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:04:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:04:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:04:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:04:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:04:59.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:00.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:00 compute-0 ceph-mon[74318]: pgmap v1409: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:01.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:02.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:02 compute-0 ceph-mon[74318]: pgmap v1410: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:03.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:05:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:04.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:04 compute-0 ceph-mon[74318]: pgmap v1411: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:05.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:06.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:06 compute-0 ceph-mon[74318]: pgmap v1412: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:07.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:08.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:08 compute-0 ceph-mon[74318]: pgmap v1413: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:05:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:09 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 00:05:09 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 11K writes, 37K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 11K writes, 3448 syncs, 3.34 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3122 writes, 7194 keys, 3122 commit groups, 1.0 writes per commit group, ingest: 2.81 MB, 0.00 MB/s
                                           Interval WAL: 3122 writes, 1392 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 00:05:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:09.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:05:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:05:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:05:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:05:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:05:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:05:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:10.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:10 compute-0 ceph-mon[74318]: pgmap v1414: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:11.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:11 compute-0 ceph-mon[74318]: pgmap v1415: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:12.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 22 00:05:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:13.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:05:13 compute-0 ceph-mon[74318]: pgmap v1416: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 22 00:05:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:14.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:14 compute-0 sudo[271532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:05:14 compute-0 sudo[271532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 22 00:05:14 compute-0 sudo[271532]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:14 compute-0 sudo[271557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:05:14 compute-0 sudo[271557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:14 compute-0 sudo[271557]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:15.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:15 compute-0 ceph-mon[74318]: pgmap v1417: 305 pgs: 305 active+clean; 42 MiB data, 253 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 22 00:05:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:16.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 43 MiB data, 253 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 134 KiB/s wr, 27 op/s
Jan 22 00:05:16 compute-0 nova_compute[247516]: 2026-01-22 00:05:16.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:05:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:17.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:17 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:05:17.186 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 00:05:17 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:05:17.192 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 00:05:17 compute-0 podman[271584]: 2026-01-22 00:05:17.999073203 +0000 UTC m=+0.102227238 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 22 00:05:18 compute-0 nova_compute[247516]: 2026-01-22 00:05:18.009 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:05:18 compute-0 nova_compute[247516]: 2026-01-22 00:05:18.011 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:05:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:18.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:18 compute-0 ceph-mon[74318]: pgmap v1418: 305 pgs: 305 active+clean; 43 MiB data, 253 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 134 KiB/s wr, 27 op/s
Jan 22 00:05:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:05:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 43 MiB data, 253 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 134 KiB/s wr, 27 op/s
Jan 22 00:05:18 compute-0 nova_compute[247516]: 2026-01-22 00:05:18.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:05:18 compute-0 nova_compute[247516]: 2026-01-22 00:05:18.994 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:05:18 compute-0 nova_compute[247516]: 2026-01-22 00:05:18.995 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:05:19 compute-0 nova_compute[247516]: 2026-01-22 00:05:19.014 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 00:05:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:19.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:19 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3131213866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:05:19 compute-0 ceph-mgr[74614]: [devicehealth INFO root] Check health
Jan 22 00:05:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 22 00:05:19 compute-0 sudo[271612]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Jan 22 00:05:19 compute-0 sudo[271612]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 22 00:05:19 compute-0 sudo[271612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Jan 22 00:05:19 compute-0 sudo[271612]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 22 00:05:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 22 00:05:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 00:05:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 22 00:05:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 00:05:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 22 00:05:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 00:05:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 22 00:05:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 00:05:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 22 00:05:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 00:05:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 22 00:05:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 00:05:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 22 00:05:20 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 00:05:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 22 00:05:20 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 00:05:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 22 00:05:20 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 00:05:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:20.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:20 compute-0 ceph-mon[74318]: pgmap v1419: 305 pgs: 305 active+clean; 43 MiB data, 253 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 134 KiB/s wr, 27 op/s
Jan 22 00:05:20 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2162559024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:05:20 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 22 00:05:20 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 22 00:05:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 00:05:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 00:05:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 00:05:20 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 22 00:05:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 00:05:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 00:05:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 00:05:20 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 22 00:05:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 00:05:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 00:05:20 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 00:05:20 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 22 00:05:20 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 22 00:05:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 22 00:05:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:21.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:21 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 00:05:21 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2066710388' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:05:21 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 00:05:21 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2066710388' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:05:21 compute-0 nova_compute[247516]: 2026-01-22 00:05:21.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:05:21 compute-0 nova_compute[247516]: 2026-01-22 00:05:21.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 22 00:05:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:22.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:22 compute-0 ceph-mon[74318]: pgmap v1420: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 22 00:05:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2066710388' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:05:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2066710388' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:05:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3801701556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:05:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3351675009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:05:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 50 op/s
Jan 22 00:05:23 compute-0 nova_compute[247516]: 2026-01-22 00:05:23.007 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:05:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:05:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:23.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:05:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:05:23 compute-0 nova_compute[247516]: 2026-01-22 00:05:23.987 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:05:24 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:05:24.195 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 00:05:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:24.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:24 compute-0 ceph-mon[74318]: pgmap v1421: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 50 op/s
Jan 22 00:05:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 67 MiB data, 263 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 53 op/s
Jan 22 00:05:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:25.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:25 compute-0 nova_compute[247516]: 2026-01-22 00:05:25.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:05:25 compute-0 nova_compute[247516]: 2026-01-22 00:05:25.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:05:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:26.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:26 compute-0 ceph-mon[74318]: pgmap v1422: 305 pgs: 305 active+clean; 67 MiB data, 263 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 53 op/s
Jan 22 00:05:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3768309970' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:05:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3768309970' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:05:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 22 00:05:26 compute-0 nova_compute[247516]: 2026-01-22 00:05:26.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:05:27 compute-0 podman[271618]: 2026-01-22 00:05:27.023660599 +0000 UTC m=+0.133017088 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:05:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:27.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:27 compute-0 ceph-mon[74318]: pgmap v1423: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 22 00:05:27 compute-0 nova_compute[247516]: 2026-01-22 00:05:27.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:05:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:28.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:05:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.7 MiB/s wr, 31 op/s
Jan 22 00:05:28 compute-0 ceph-mon[74318]: pgmap v1424: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.7 MiB/s wr, 31 op/s
Jan 22 00:05:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:29.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:29 compute-0 nova_compute[247516]: 2026-01-22 00:05:29.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:05:30 compute-0 nova_compute[247516]: 2026-01-22 00:05:30.022 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:05:30 compute-0 nova_compute[247516]: 2026-01-22 00:05:30.023 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:05:30 compute-0 nova_compute[247516]: 2026-01-22 00:05:30.023 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:05:30 compute-0 nova_compute[247516]: 2026-01-22 00:05:30.024 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:05:30 compute-0 nova_compute[247516]: 2026-01-22 00:05:30.025 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:05:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:30.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:05:30 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2509483711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:05:30 compute-0 nova_compute[247516]: 2026-01-22 00:05:30.532 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:05:30 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2509483711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:05:30 compute-0 nova_compute[247516]: 2026-01-22 00:05:30.717 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:05:30 compute-0 nova_compute[247516]: 2026-01-22 00:05:30.719 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5187MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:05:30 compute-0 nova_compute[247516]: 2026-01-22 00:05:30.719 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:05:30 compute-0 nova_compute[247516]: 2026-01-22 00:05:30.720 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:05:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.7 MiB/s wr, 31 op/s
Jan 22 00:05:30 compute-0 nova_compute[247516]: 2026-01-22 00:05:30.861 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 22 00:05:30 compute-0 nova_compute[247516]: 2026-01-22 00:05:30.862 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:05:30 compute-0 nova_compute[247516]: 2026-01-22 00:05:30.862 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:05:30 compute-0 nova_compute[247516]: 2026-01-22 00:05:30.895 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:05:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:31.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:05:31 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/122127488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:05:31 compute-0 nova_compute[247516]: 2026-01-22 00:05:31.370 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:05:31 compute-0 nova_compute[247516]: 2026-01-22 00:05:31.376 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:05:31 compute-0 nova_compute[247516]: 2026-01-22 00:05:31.394 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 00:05:31 compute-0 nova_compute[247516]: 2026-01-22 00:05:31.396 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:05:31 compute-0 nova_compute[247516]: 2026-01-22 00:05:31.396 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:05:31 compute-0 nova_compute[247516]: 2026-01-22 00:05:31.396 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:05:31 compute-0 nova_compute[247516]: 2026-01-22 00:05:31.397 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 22 00:05:31 compute-0 nova_compute[247516]: 2026-01-22 00:05:31.412 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 22 00:05:31 compute-0 ceph-mon[74318]: pgmap v1425: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.7 MiB/s wr, 31 op/s
Jan 22 00:05:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/122127488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:05:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:32.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 22 KiB/s wr, 17 op/s
Jan 22 00:05:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:33.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:05:33 compute-0 ceph-mon[74318]: pgmap v1426: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 22 KiB/s wr, 17 op/s
Jan 22 00:05:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:34.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:34 compute-0 nova_compute[247516]: 2026-01-22 00:05:34.412 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:05:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 5.0 KiB/s rd, 22 KiB/s wr, 9 op/s
Jan 22 00:05:34 compute-0 sudo[271685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:05:34 compute-0 sudo[271685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:34 compute-0 sudo[271685]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:35 compute-0 sudo[271710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:05:35 compute-0 sudo[271710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:35 compute-0 sudo[271710]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:35.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:35 compute-0 ceph-mon[74318]: pgmap v1427: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 5.0 KiB/s rd, 22 KiB/s wr, 9 op/s
Jan 22 00:05:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:36.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 341 B/s wr, 5 op/s
Jan 22 00:05:36 compute-0 ceph-mon[74318]: pgmap v1428: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 341 B/s wr, 5 op/s
Jan 22 00:05:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:37.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:38.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:05:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:39.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:05:39
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['backups', '.mgr', 'volumes', 'default.rgw.log', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'images', 'cephfs.cephfs.meta']
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:05:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:05:39 compute-0 ceph-mon[74318]: pgmap v1429: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:40.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:41.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:41 compute-0 ceph-mon[74318]: pgmap v1430: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:42.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:42 compute-0 ceph-mon[74318]: pgmap v1431: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:43.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:05:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:44.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:45.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:45 compute-0 ceph-mon[74318]: pgmap v1432: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:46.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:46 compute-0 ceph-mon[74318]: pgmap v1433: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:47.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:05:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:48.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:05:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:05:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:05:48.765 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:05:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:05:48.766 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:05:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:05:48.766 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:05:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:48 compute-0 podman[271742]: 2026-01-22 00:05:48.995098352 +0000 UTC m=+0.109739169 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 00:05:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:49.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:49 compute-0 ceph-mon[74318]: pgmap v1434: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:05:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:50.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:05:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:50 compute-0 ceph-mon[74318]: pgmap v1435: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:51.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:52.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:53 compute-0 sudo[271770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:05:53 compute-0 sudo[271770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:53 compute-0 sudo[271770]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:53 compute-0 sudo[271795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:05:53 compute-0 sudo[271795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:53 compute-0 sudo[271795]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:53 compute-0 sudo[271820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:05:53 compute-0 sudo[271820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:53 compute-0 sudo[271820]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:53.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:53 compute-0 sudo[271845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 00:05:53 compute-0 sudo[271845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:53 compute-0 sudo[271845]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:05:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:05:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:05:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:05:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 00:05:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:05:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 00:05:53 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:05:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:05:53 compute-0 sudo[271890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:05:53 compute-0 sudo[271890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:53 compute-0 sudo[271890]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:53 compute-0 sudo[271915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:05:53 compute-0 sudo[271915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:53 compute-0 sudo[271915]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:53 compute-0 sudo[271940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:05:53 compute-0 sudo[271940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:53 compute-0 sudo[271940]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:53 compute-0 sudo[271965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 00:05:53 compute-0 sudo[271965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:53 compute-0 ceph-mon[74318]: pgmap v1436: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:05:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:05:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:05:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:05:53 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:05:54 compute-0 sudo[271965]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:54.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:05:54 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:05:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 00:05:54 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:05:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 00:05:54 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 1f7a046f-7047-4015-b4b8-7b437f5c479f does not exist
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev f5de1a95-a681-4f3d-99a6-c543fc026134 does not exist
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev efe3cd67-8d31-46a2-8353-58b469d08575 does not exist
Jan 22 00:05:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 00:05:54 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:05:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 00:05:54 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:05:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:05:54 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.217749627768472e-05 of space, bias 1.0, pg target 0.003653248883305416 quantized to 32 (current 32)
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 00:05:54 compute-0 sudo[272023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:05:54 compute-0 sudo[272023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:54 compute-0 sudo[272023]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:54 compute-0 sudo[272048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:05:54 compute-0 sudo[272048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:54 compute-0 sudo[272048]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:54 compute-0 sudo[272073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:05:54 compute-0 sudo[272073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:54 compute-0 sudo[272073]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:54 compute-0 sudo[272098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 00:05:54 compute-0 sudo[272098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 0 op/s
Jan 22 00:05:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:05:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:05:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:05:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:05:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:05:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:05:55 compute-0 sudo[272150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:05:55 compute-0 sudo[272150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:55 compute-0 sudo[272150]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:55.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:55 compute-0 podman[272188]: 2026-01-22 00:05:55.24975829 +0000 UTC m=+0.060689229 container create 1c7e4b77df58a1f954bf575175c5b61801ec5c432a238213ec2a2fa2e80a6737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hopper, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Jan 22 00:05:55 compute-0 sudo[272193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:05:55 compute-0 sudo[272193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:55 compute-0 sudo[272193]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:55 compute-0 systemd[1]: Started libpod-conmon-1c7e4b77df58a1f954bf575175c5b61801ec5c432a238213ec2a2fa2e80a6737.scope.
Jan 22 00:05:55 compute-0 podman[272188]: 2026-01-22 00:05:55.230120955 +0000 UTC m=+0.041051914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:05:55 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:05:55 compute-0 podman[272188]: 2026-01-22 00:05:55.342435764 +0000 UTC m=+0.153366813 container init 1c7e4b77df58a1f954bf575175c5b61801ec5c432a238213ec2a2fa2e80a6737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hopper, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 00:05:55 compute-0 podman[272188]: 2026-01-22 00:05:55.349006787 +0000 UTC m=+0.159937766 container start 1c7e4b77df58a1f954bf575175c5b61801ec5c432a238213ec2a2fa2e80a6737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hopper, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 00:05:55 compute-0 podman[272188]: 2026-01-22 00:05:55.353395592 +0000 UTC m=+0.164326581 container attach 1c7e4b77df58a1f954bf575175c5b61801ec5c432a238213ec2a2fa2e80a6737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 22 00:05:55 compute-0 competent_hopper[272230]: 167 167
Jan 22 00:05:55 compute-0 systemd[1]: libpod-1c7e4b77df58a1f954bf575175c5b61801ec5c432a238213ec2a2fa2e80a6737.scope: Deactivated successfully.
Jan 22 00:05:55 compute-0 podman[272188]: 2026-01-22 00:05:55.358456617 +0000 UTC m=+0.169387596 container died 1c7e4b77df58a1f954bf575175c5b61801ec5c432a238213ec2a2fa2e80a6737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hopper, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:05:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4172c3c6c3ac1821e4755b00a3b3f890bf3c39f7941b66cfe2d13628cddd5085-merged.mount: Deactivated successfully.
Jan 22 00:05:55 compute-0 podman[272188]: 2026-01-22 00:05:55.402466353 +0000 UTC m=+0.213397292 container remove 1c7e4b77df58a1f954bf575175c5b61801ec5c432a238213ec2a2fa2e80a6737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:05:55 compute-0 systemd[1]: libpod-conmon-1c7e4b77df58a1f954bf575175c5b61801ec5c432a238213ec2a2fa2e80a6737.scope: Deactivated successfully.
Jan 22 00:05:55 compute-0 podman[272254]: 2026-01-22 00:05:55.592537316 +0000 UTC m=+0.049065372 container create 8e0115246ed838a856b7abecdcfffcbee0af2c233637284970a4463b56b55a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 00:05:55 compute-0 systemd[1]: Started libpod-conmon-8e0115246ed838a856b7abecdcfffcbee0af2c233637284970a4463b56b55a32.scope.
Jan 22 00:05:55 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d590235d747aee3a55a1a21b01e1c76b5e3632bec35b68e1374b1d5cb5c8f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d590235d747aee3a55a1a21b01e1c76b5e3632bec35b68e1374b1d5cb5c8f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d590235d747aee3a55a1a21b01e1c76b5e3632bec35b68e1374b1d5cb5c8f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d590235d747aee3a55a1a21b01e1c76b5e3632bec35b68e1374b1d5cb5c8f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d590235d747aee3a55a1a21b01e1c76b5e3632bec35b68e1374b1d5cb5c8f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 00:05:55 compute-0 podman[272254]: 2026-01-22 00:05:55.572783878 +0000 UTC m=+0.029311944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:05:55 compute-0 podman[272254]: 2026-01-22 00:05:55.669551017 +0000 UTC m=+0.126079063 container init 8e0115246ed838a856b7abecdcfffcbee0af2c233637284970a4463b56b55a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 00:05:55 compute-0 podman[272254]: 2026-01-22 00:05:55.683621571 +0000 UTC m=+0.140149647 container start 8e0115246ed838a856b7abecdcfffcbee0af2c233637284970a4463b56b55a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mcnulty, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 00:05:55 compute-0 podman[272254]: 2026-01-22 00:05:55.689352897 +0000 UTC m=+0.145881043 container attach 8e0115246ed838a856b7abecdcfffcbee0af2c233637284970a4463b56b55a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:05:55 compute-0 ceph-mon[74318]: pgmap v1437: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 0 op/s
Jan 22 00:05:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:56.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:56 compute-0 stupefied_mcnulty[272270]: --> passed data devices: 0 physical, 1 LVM
Jan 22 00:05:56 compute-0 stupefied_mcnulty[272270]: --> relative data size: 1.0
Jan 22 00:05:56 compute-0 stupefied_mcnulty[272270]: --> All data devices are unavailable
Jan 22 00:05:56 compute-0 systemd[1]: libpod-8e0115246ed838a856b7abecdcfffcbee0af2c233637284970a4463b56b55a32.scope: Deactivated successfully.
Jan 22 00:05:56 compute-0 podman[272254]: 2026-01-22 00:05:56.543331685 +0000 UTC m=+0.999859771 container died 8e0115246ed838a856b7abecdcfffcbee0af2c233637284970a4463b56b55a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mcnulty, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 00:05:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-04d590235d747aee3a55a1a21b01e1c76b5e3632bec35b68e1374b1d5cb5c8f4-merged.mount: Deactivated successfully.
Jan 22 00:05:56 compute-0 podman[272254]: 2026-01-22 00:05:56.622135962 +0000 UTC m=+1.078664048 container remove 8e0115246ed838a856b7abecdcfffcbee0af2c233637284970a4463b56b55a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mcnulty, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 00:05:56 compute-0 systemd[1]: libpod-conmon-8e0115246ed838a856b7abecdcfffcbee0af2c233637284970a4463b56b55a32.scope: Deactivated successfully.
Jan 22 00:05:56 compute-0 sudo[272098]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:56 compute-0 sudo[272297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:05:56 compute-0 sudo[272297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:56 compute-0 sudo[272297]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:56 compute-0 sudo[272322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:05:56 compute-0 sudo[272322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:56 compute-0 sudo[272322]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 6 op/s
Jan 22 00:05:56 compute-0 sudo[272347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:05:56 compute-0 ceph-mon[74318]: pgmap v1438: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 6 op/s
Jan 22 00:05:56 compute-0 sudo[272347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:56 compute-0 sudo[272347]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:57 compute-0 sudo[272372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 22 00:05:57 compute-0 sudo[272372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:57.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:57 compute-0 podman[272438]: 2026-01-22 00:05:57.490476992 +0000 UTC m=+0.061158385 container create 684986a633702eb15920af89c9d6ef92b01924456519e3c55910b8102c6cf922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 22 00:05:57 compute-0 systemd[1]: Started libpod-conmon-684986a633702eb15920af89c9d6ef92b01924456519e3c55910b8102c6cf922.scope.
Jan 22 00:05:57 compute-0 podman[272438]: 2026-01-22 00:05:57.470494906 +0000 UTC m=+0.041176279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:05:57 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:05:57 compute-0 podman[272438]: 2026-01-22 00:05:57.588406127 +0000 UTC m=+0.159087520 container init 684986a633702eb15920af89c9d6ef92b01924456519e3c55910b8102c6cf922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:05:57 compute-0 podman[272438]: 2026-01-22 00:05:57.602296295 +0000 UTC m=+0.172977688 container start 684986a633702eb15920af89c9d6ef92b01924456519e3c55910b8102c6cf922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 00:05:57 compute-0 podman[272438]: 2026-01-22 00:05:57.607706461 +0000 UTC m=+0.178387884 container attach 684986a633702eb15920af89c9d6ef92b01924456519e3c55910b8102c6cf922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 00:05:57 compute-0 nervous_lichterman[272457]: 167 167
Jan 22 00:05:57 compute-0 systemd[1]: libpod-684986a633702eb15920af89c9d6ef92b01924456519e3c55910b8102c6cf922.scope: Deactivated successfully.
Jan 22 00:05:57 compute-0 podman[272438]: 2026-01-22 00:05:57.610458266 +0000 UTC m=+0.181139659 container died 684986a633702eb15920af89c9d6ef92b01924456519e3c55910b8102c6cf922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 00:05:57 compute-0 podman[272453]: 2026-01-22 00:05:57.63718959 +0000 UTC m=+0.094257094 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
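Note that the health_status event above embeds its config_data as a Python literal (single-quoted strings, bare True), not JSON, so json.loads would reject it; ast.literal_eval is one safe way to recover it. The payload below is a trimmed stand-in for the full value, kept to a few representative keys:

    import ast

    # Trimmed stand-in for the config_data value logged above; the real
    # payload carries the same Python-literal syntax.
    payload = ("{'cgroupns': 'host', 'depends_on': ['openvswitch.service'], "
               "'net': 'host', 'privileged': True, 'restart': 'always'}")
    config = ast.literal_eval(payload)
    print(config['restart'], config['privileged'])   # -> always True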
Jan 22 00:05:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-df63b5d7df0cfd57a567839144004cca1b2e819c1f39048abdb17ced5097fda0-merged.mount: Deactivated successfully.
Jan 22 00:05:57 compute-0 podman[272438]: 2026-01-22 00:05:57.670159765 +0000 UTC m=+0.240841128 container remove 684986a633702eb15920af89c9d6ef92b01924456519e3c55910b8102c6cf922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:05:57 compute-0 systemd[1]: libpod-conmon-684986a633702eb15920af89c9d6ef92b01924456519e3c55910b8102c6cf922.scope: Deactivated successfully.
Jan 22 00:05:57 compute-0 podman[272500]: 2026-01-22 00:05:57.857931317 +0000 UTC m=+0.046489653 container create bb77b35726c9092716af7062b260e6245664a5b52b1fae177d7347c904eea622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:05:57 compute-0 systemd[1]: Started libpod-conmon-bb77b35726c9092716af7062b260e6245664a5b52b1fae177d7347c904eea622.scope.
Jan 22 00:05:57 compute-0 podman[272500]: 2026-01-22 00:05:57.837030383 +0000 UTC m=+0.025588729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:05:57 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae311648991ddb7eb4ba3a18907eb9197cdfdadd27c59c56872271b3ea6cbbfc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae311648991ddb7eb4ba3a18907eb9197cdfdadd27c59c56872271b3ea6cbbfc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae311648991ddb7eb4ba3a18907eb9197cdfdadd27c59c56872271b3ea6cbbfc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae311648991ddb7eb4ba3a18907eb9197cdfdadd27c59c56872271b3ea6cbbfc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:05:57 compute-0 podman[272500]: 2026-01-22 00:05:57.97461598 +0000 UTC m=+0.163174316 container init bb77b35726c9092716af7062b260e6245664a5b52b1fae177d7347c904eea622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:05:57 compute-0 podman[272500]: 2026-01-22 00:05:57.986379793 +0000 UTC m=+0.174938129 container start bb77b35726c9092716af7062b260e6245664a5b52b1fae177d7347c904eea622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wiles, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:05:57 compute-0 podman[272500]: 2026-01-22 00:05:57.992660466 +0000 UTC m=+0.181218792 container attach bb77b35726c9092716af7062b260e6245664a5b52b1fae177d7347c904eea622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wiles, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:05:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:05:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:05:58.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:05:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:05:58.629217) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040358629296, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1332, "num_deletes": 506, "total_data_size": 1717385, "memory_usage": 1753560, "flush_reason": "Manual Compaction"}
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040358645998, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1091216, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31120, "largest_seqno": 32451, "table_properties": {"data_size": 1086217, "index_size": 1883, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15775, "raw_average_key_size": 19, "raw_value_size": 1073443, "raw_average_value_size": 1338, "num_data_blocks": 82, "num_entries": 802, "num_filter_entries": 802, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769040269, "oldest_key_time": 1769040269, "file_creation_time": 1769040358, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 16941 microseconds, and 10974 cpu microseconds.
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:05:58.646160) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1091216 bytes OK
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:05:58.646208) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:05:58.648596) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:05:58.648609) EVENT_LOG_v1 {"time_micros": 1769040358648605, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:05:58.648628) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1710399, prev total WAL file size 1710399, number of live WAL files 2.
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:05:58.649688) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303030' seq:72057594037927935, type:22 .. '6D6772737461740031323531' seq:0, type:0; will stop at (end)
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1065KB)], [68(10MB)]
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040358649885, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 11858864, "oldest_snapshot_seqno": -1}
Jan 22 00:05:58 compute-0 stoic_wiles[272517]: {
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:     "1": [
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:         {
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:             "devices": [
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:                 "/dev/loop3"
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:             ],
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:             "lv_name": "ceph_lv0",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:             "lv_size": "7511998464",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:             "name": "ceph_lv0",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:             "tags": {
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:                 "ceph.cluster_name": "ceph",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:                 "ceph.crush_device_class": "",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:                 "ceph.encrypted": "0",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:                 "ceph.osd_id": "1",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:                 "ceph.type": "block",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:                 "ceph.vdo": "0"
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:             },
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:             "type": "block",
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:             "vg_name": "ceph_vg0"
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:         }
Jan 22 00:05:58 compute-0 stoic_wiles[272517]:     ]
Jan 22 00:05:58 compute-0 stoic_wiles[272517]: }
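The JSON printed by the stoic_wiles helper container resembles `ceph-volume lvm list --format json` output: top-level keys are OSD ids, each mapping to a list of logical-volume records whose ceph.* tags identify the cluster, OSD fsid, and device role. A short sketch of walking that structure; `raw` here is a reduced stand-in for the captured container stdout:

    import json

    # Reduced stand-in for the payload above (one OSD, one LV).
    raw = '''
    {"1": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
            "vg_name": "ceph_vg0",
            "tags": {"ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
                     "ceph.type": "block"}}]}
    '''
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            # -> 1 /dev/ceph_vg0/ceph_lv0 block
            print(osd_id, lv['lv_path'], lv['tags']['ceph.type'])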
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5516 keys, 8498652 bytes, temperature: kUnknown
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040358758206, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8498652, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8462870, "index_size": 20888, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 141628, "raw_average_key_size": 25, "raw_value_size": 8364308, "raw_average_value_size": 1516, "num_data_blocks": 843, "num_entries": 5516, "num_filter_entries": 5516, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769040358, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:05:58.759343) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8498652 bytes
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:05:58.761342) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.7 rd, 77.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.3 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(18.7) write-amplify(7.8) OK, records in: 6512, records dropped: 996 output_compression: NoCompression
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:05:58.761412) EVENT_LOG_v1 {"time_micros": 1769040358761385, "job": 38, "event": "compaction_finished", "compaction_time_micros": 109147, "compaction_time_cpu_micros": 48235, "output_level": 6, "num_output_files": 1, "total_output_size": 8498652, "num_input_records": 6512, "num_output_records": 5516, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040358762478, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040358766530, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:05:58.649544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:05:58.766686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:05:58.766693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:05:58.766696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:05:58.766699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:05:58 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:05:58.766703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
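Interleaved with its prose messages, the monitor's embedded rocksdb also emits machine-readable EVENT_LOG_v1 records (flush_started, table_file_creation, compaction_finished, and so on) whose payload is plain JSON. One way to pull those records out of lines like the ones above, assuming the journald prefix has already been removed; event_payloads is an illustrative helper, not part of any library:

    import json

    def event_payloads(lines):
        # Yield the JSON payload of every EVENT_LOG_v1 record, including
        # lines wrapped in an "(Original Log Time ...)" prefix.
        marker = 'EVENT_LOG_v1 '
        for line in lines:
            _, sep, payload = line.partition(marker)
            if sep:
                yield json.loads(payload)

    sample = ['rocksdb: EVENT_LOG_v1 {"time_micros": 1769040358761385, '
              '"job": 38, "event": "compaction_finished", '
              '"total_output_size": 8498652}']
    for ev in event_payloads(sample):
        print(ev['event'], ev['job'])   # -> compaction_finished 38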
Jan 22 00:05:58 compute-0 systemd[1]: libpod-bb77b35726c9092716af7062b260e6245664a5b52b1fae177d7347c904eea622.scope: Deactivated successfully.
Jan 22 00:05:58 compute-0 podman[272500]: 2026-01-22 00:05:58.770471958 +0000 UTC m=+0.959030274 container died bb77b35726c9092716af7062b260e6245664a5b52b1fae177d7347c904eea622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wiles, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:05:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae311648991ddb7eb4ba3a18907eb9197cdfdadd27c59c56872271b3ea6cbbfc-merged.mount: Deactivated successfully.
Jan 22 00:05:58 compute-0 podman[272500]: 2026-01-22 00:05:58.82544146 +0000 UTC m=+1.013999796 container remove bb77b35726c9092716af7062b260e6245664a5b52b1fae177d7347c904eea622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:05:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 6 op/s
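The recurring pgmap lines compress cluster state into one string: map version, PG count and states, logical data stored, raw space used and available, and current client throughput. A hypothetical regex for the summary fields, matched against the line above:

    import re

    # Illustrative pattern for the pgmap summary lines in this log.
    PGMAP_RE = re.compile(
        r'pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: .*?; '
        r'(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, '
        r'(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail'
    )

    line = ('pgmap v1439: 305 pgs: 305 active+clean; 42 MiB data, '
            '254 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 6 op/s')
    m = PGMAP_RE.search(line)
    if m:
        # -> 1439 305 254 MiB
        print(m.group('version'), m.group('pgs'), m.group('used'))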
Jan 22 00:05:58 compute-0 systemd[1]: libpod-conmon-bb77b35726c9092716af7062b260e6245664a5b52b1fae177d7347c904eea622.scope: Deactivated successfully.
Jan 22 00:05:58 compute-0 sudo[272372]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:58 compute-0 sudo[272537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:05:58 compute-0 sudo[272537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:58 compute-0 sudo[272537]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:59 compute-0 sudo[272562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:05:59 compute-0 sudo[272562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:59 compute-0 sudo[272562]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:59 compute-0 sudo[272587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:05:59 compute-0 sudo[272587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:59 compute-0 sudo[272587]: pam_unix(sudo:session): session closed for user root
Jan 22 00:05:59 compute-0 sudo[272612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 22 00:05:59 compute-0 sudo[272612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:05:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:05:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:05:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:05:59.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:05:59 compute-0 podman[272679]: 2026-01-22 00:05:59.579822222 +0000 UTC m=+0.063068964 container create cb821fd21c4aea1243577371c1e8d85d0f9280cd67469713f7bd40c2776a50e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 00:05:59 compute-0 systemd[1]: Started libpod-conmon-cb821fd21c4aea1243577371c1e8d85d0f9280cd67469713f7bd40c2776a50e0.scope.
Jan 22 00:05:59 compute-0 ceph-mon[74318]: pgmap v1439: 305 pgs: 305 active+clean; 42 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 6 op/s
Jan 22 00:05:59 compute-0 podman[272679]: 2026-01-22 00:05:59.549291492 +0000 UTC m=+0.032538294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:05:59 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:05:59 compute-0 podman[272679]: 2026-01-22 00:05:59.678054597 +0000 UTC m=+0.161301379 container init cb821fd21c4aea1243577371c1e8d85d0f9280cd67469713f7bd40c2776a50e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kirch, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 00:05:59 compute-0 podman[272679]: 2026-01-22 00:05:59.690942903 +0000 UTC m=+0.174189635 container start cb821fd21c4aea1243577371c1e8d85d0f9280cd67469713f7bd40c2776a50e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kirch, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:05:59 compute-0 podman[272679]: 2026-01-22 00:05:59.694064739 +0000 UTC m=+0.177311481 container attach cb821fd21c4aea1243577371c1e8d85d0f9280cd67469713f7bd40c2776a50e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 00:05:59 compute-0 stupefied_kirch[272695]: 167 167
Jan 22 00:05:59 compute-0 systemd[1]: libpod-cb821fd21c4aea1243577371c1e8d85d0f9280cd67469713f7bd40c2776a50e0.scope: Deactivated successfully.
Jan 22 00:05:59 compute-0 podman[272679]: 2026-01-22 00:05:59.699898259 +0000 UTC m=+0.183145011 container died cb821fd21c4aea1243577371c1e8d85d0f9280cd67469713f7bd40c2776a50e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kirch, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:05:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ef7caf4b72ddc68bc00bc208b4ca3e59bc423169e0db6840afbcdf4f15736c3-merged.mount: Deactivated successfully.
Jan 22 00:05:59 compute-0 podman[272679]: 2026-01-22 00:05:59.749393063 +0000 UTC m=+0.232639795 container remove cb821fd21c4aea1243577371c1e8d85d0f9280cd67469713f7bd40c2776a50e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kirch, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 00:05:59 compute-0 systemd[1]: libpod-conmon-cb821fd21c4aea1243577371c1e8d85d0f9280cd67469713f7bd40c2776a50e0.scope: Deactivated successfully.
Jan 22 00:05:59 compute-0 podman[272721]: 2026-01-22 00:05:59.945716509 +0000 UTC m=+0.057662228 container create cb428f7c0cd7b80c2165ca26b2cfb9b2e690652798773cf70d8030c240a617b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:05:59 compute-0 systemd[1]: Started libpod-conmon-cb428f7c0cd7b80c2165ca26b2cfb9b2e690652798773cf70d8030c240a617b9.scope.
Jan 22 00:06:00 compute-0 podman[272721]: 2026-01-22 00:05:59.918442029 +0000 UTC m=+0.030387838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:06:00 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2a294f9d46eb85d5c01fb79b3f16398dee21377bbb9e3a1a62fb21db3145e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2a294f9d46eb85d5c01fb79b3f16398dee21377bbb9e3a1a62fb21db3145e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2a294f9d46eb85d5c01fb79b3f16398dee21377bbb9e3a1a62fb21db3145e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2a294f9d46eb85d5c01fb79b3f16398dee21377bbb9e3a1a62fb21db3145e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:06:00 compute-0 podman[272721]: 2026-01-22 00:06:00.049715271 +0000 UTC m=+0.161661080 container init cb428f7c0cd7b80c2165ca26b2cfb9b2e690652798773cf70d8030c240a617b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dhawan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 00:06:00 compute-0 podman[272721]: 2026-01-22 00:06:00.06236717 +0000 UTC m=+0.174312929 container start cb428f7c0cd7b80c2165ca26b2cfb9b2e690652798773cf70d8030c240a617b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:06:00 compute-0 podman[272721]: 2026-01-22 00:06:00.066351784 +0000 UTC m=+0.178297593 container attach cb428f7c0cd7b80c2165ca26b2cfb9b2e690652798773cf70d8030c240a617b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dhawan, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Jan 22 00:06:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:06:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:00.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:06:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 80 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 29 op/s
Jan 22 00:06:00 compute-0 nostalgic_dhawan[272738]: {
Jan 22 00:06:00 compute-0 nostalgic_dhawan[272738]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 22 00:06:00 compute-0 nostalgic_dhawan[272738]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:06:00 compute-0 nostalgic_dhawan[272738]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 00:06:00 compute-0 nostalgic_dhawan[272738]:         "osd_id": 1,
Jan 22 00:06:00 compute-0 nostalgic_dhawan[272738]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:06:00 compute-0 nostalgic_dhawan[272738]:         "type": "bluestore"
Jan 22 00:06:00 compute-0 nostalgic_dhawan[272738]:     }
Jan 22 00:06:00 compute-0 nostalgic_dhawan[272738]: }
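This output corresponds to the `ceph-volume ... raw list --format json` command from the sudo line a few seconds earlier: a map of osd_uuid to device metadata. A small sketch reducing it to an osd_id-to-device table; `raw` stands in for the captured stdout:

    import json

    # Stand-in for the raw-list payload printed above.
    raw = '''
    {"4f45f4f4-edfc-474c-93fc-45d596171ed8": {
        "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 1,
        "type": "bluestore"}}
    '''
    devices = {meta['osd_id']: meta['device']
               for meta in json.loads(raw).values()}
    print(devices)   # -> {1: '/dev/mapper/ceph_vg0-ceph_lv0'}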
Jan 22 00:06:00 compute-0 systemd[1]: libpod-cb428f7c0cd7b80c2165ca26b2cfb9b2e690652798773cf70d8030c240a617b9.scope: Deactivated successfully.
Jan 22 00:06:00 compute-0 podman[272721]: 2026-01-22 00:06:00.90108337 +0000 UTC m=+1.013029109 container died cb428f7c0cd7b80c2165ca26b2cfb9b2e690652798773cf70d8030c240a617b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dhawan, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 00:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd2a294f9d46eb85d5c01fb79b3f16398dee21377bbb9e3a1a62fb21db3145e2-merged.mount: Deactivated successfully.
Jan 22 00:06:00 compute-0 podman[272721]: 2026-01-22 00:06:00.961964365 +0000 UTC m=+1.073910084 container remove cb428f7c0cd7b80c2165ca26b2cfb9b2e690652798773cf70d8030c240a617b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dhawan, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:06:00 compute-0 systemd[1]: libpod-conmon-cb428f7c0cd7b80c2165ca26b2cfb9b2e690652798773cf70d8030c240a617b9.scope: Deactivated successfully.
Jan 22 00:06:00 compute-0 sudo[272612]: pam_unix(sudo:session): session closed for user root
Jan 22 00:06:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:06:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:06:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:06:01 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:06:01 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 3923b9da-7583-4892-9ec6-1b7574a75cdd does not exist
Jan 22 00:06:01 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 429e492a-6b3e-4ba6-aacf-48d910a82846 does not exist
Jan 22 00:06:01 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 8b55aa77-3648-48f3-82ce-2605ddd7dc6d does not exist
Jan 22 00:06:01 compute-0 sudo[272774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:06:01 compute-0 sudo[272774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:06:01 compute-0 sudo[272774]: pam_unix(sudo:session): session closed for user root
Jan 22 00:06:01 compute-0 sudo[272799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 00:06:01 compute-0 sudo[272799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:06:01 compute-0 sudo[272799]: pam_unix(sudo:session): session closed for user root
Jan 22 00:06:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:01.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 00:06:01 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2746343802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 00:06:01 compute-0 ceph-mon[74318]: pgmap v1440: 305 pgs: 305 active+clean; 80 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 29 op/s
Jan 22 00:06:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:06:01 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:06:01 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2746343802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
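The audited "mon dump" request is the kind of command a client such as client.openstack sends through the monitor command interface; the rados Python bindings expose this as mon_command. A sketch under the assumption that a readable /etc/ceph/ceph.conf and a matching keyring are available on the calling host:

    import json
    import rados

    # Connect with defaults from ceph.conf; credentials are assumed.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "mon dump", "format": "json"})
        ret, outbuf, errs = cluster.mon_command(cmd, b'')
        if ret == 0:
            mons = json.loads(outbuf)
            print([m['name'] for m in mons['mons']])
    finally:
        cluster.shutdown()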
Jan 22 00:06:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Jan 22 00:06:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Jan 22 00:06:02 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Jan 22 00:06:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:02.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 49 op/s
Jan 22 00:06:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Jan 22 00:06:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Jan 22 00:06:03 compute-0 ceph-mon[74318]: osdmap e178: 3 total, 3 up, 3 in
Jan 22 00:06:03 compute-0 ceph-mon[74318]: pgmap v1442: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 49 op/s
Jan 22 00:06:03 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Jan 22 00:06:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:03.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:06:04 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Jan 22 00:06:04 compute-0 ceph-mon[74318]: osdmap e179: 3 total, 3 up, 3 in
Jan 22 00:06:04 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Jan 22 00:06:04 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Jan 22 00:06:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:04.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 111 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.6 MiB/s wr, 84 op/s
Jan 22 00:06:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Jan 22 00:06:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Jan 22 00:06:05 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Jan 22 00:06:05 compute-0 ceph-mon[74318]: osdmap e180: 3 total, 3 up, 3 in
Jan 22 00:06:05 compute-0 ceph-mon[74318]: pgmap v1445: 305 pgs: 305 active+clean; 111 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.6 MiB/s wr, 84 op/s
Jan 22 00:06:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:05.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:06 compute-0 ceph-mon[74318]: osdmap e181: 3 total, 3 up, 3 in
Jan 22 00:06:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:06:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:06.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:06:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.4 MiB/s wr, 67 op/s
Jan 22 00:06:07 compute-0 ceph-mon[74318]: pgmap v1447: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.4 MiB/s wr, 67 op/s
Jan 22 00:06:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:07.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 00:06:08 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3330399904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 00:06:08 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3330399904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 00:06:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:08.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:06:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Jan 22 00:06:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Jan 22 00:06:08 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Jan 22 00:06:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.7 MiB/s wr, 56 op/s
Jan 22 00:06:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:09.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:06:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:06:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:06:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:06:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:06:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:06:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Jan 22 00:06:09 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Jan 22 00:06:09 compute-0 ceph-mon[74318]: osdmap e182: 3 total, 3 up, 3 in
Jan 22 00:06:09 compute-0 ceph-mon[74318]: pgmap v1449: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.7 MiB/s wr, 56 op/s
Jan 22 00:06:09 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Jan 22 00:06:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:10.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:10 compute-0 ceph-mon[74318]: osdmap e183: 3 total, 3 up, 3 in
Jan 22 00:06:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 178 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.8 MiB/s wr, 53 op/s
Jan 22 00:06:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:11.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:11 compute-0 ceph-mon[74318]: pgmap v1451: 305 pgs: 305 active+clean; 178 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.8 MiB/s wr, 53 op/s
Jan 22 00:06:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:12.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 180 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.1 MiB/s wr, 85 op/s
Jan 22 00:06:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:13.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:06:13 compute-0 ceph-mon[74318]: pgmap v1452: 305 pgs: 305 active+clean; 180 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.1 MiB/s wr, 85 op/s
Jan 22 00:06:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:14.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 180 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 57 op/s
Jan 22 00:06:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:15.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:15 compute-0 sudo[272831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:06:15 compute-0 sudo[272831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:06:15 compute-0 sudo[272831]: pam_unix(sudo:session): session closed for user root
Jan 22 00:06:15 compute-0 sudo[272856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:06:15 compute-0 sudo[272856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:06:15 compute-0 sudo[272856]: pam_unix(sudo:session): session closed for user root
Jan 22 00:06:15 compute-0 ceph-mon[74318]: pgmap v1453: 305 pgs: 305 active+clean; 180 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 57 op/s
Jan 22 00:06:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:06:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:16.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:06:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 180 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 56 op/s
Jan 22 00:06:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:06:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:17.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:06:17 compute-0 ceph-mon[74318]: pgmap v1454: 305 pgs: 305 active+clean; 180 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 56 op/s
Jan 22 00:06:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:18.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:06:18 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:06:18.715 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 00:06:18 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:06:18.718 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 00:06:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 180 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 46 op/s
Jan 22 00:06:18 compute-0 nova_compute[247516]: 2026-01-22 00:06:18.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:06:18 compute-0 nova_compute[247516]: 2026-01-22 00:06:18.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:06:18 compute-0 nova_compute[247516]: 2026-01-22 00:06:18.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:06:19 compute-0 nova_compute[247516]: 2026-01-22 00:06:19.011 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 00:06:19 compute-0 nova_compute[247516]: 2026-01-22 00:06:19.012 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:06:19 compute-0 nova_compute[247516]: 2026-01-22 00:06:19.012 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:06:19 compute-0 ceph-mon[74318]: pgmap v1455: 305 pgs: 305 active+clean; 180 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 46 op/s
Jan 22 00:06:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:19.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:20 compute-0 podman[272884]: 2026-01-22 00:06:20.041856688 +0000 UTC m=+0.136201656 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 00:06:20 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/338056889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:06:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:06:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:20.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:06:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 180 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 825 KiB/s rd, 1.1 MiB/s wr, 37 op/s
Jan 22 00:06:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/4276820795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:06:21 compute-0 ceph-mon[74318]: pgmap v1456: 305 pgs: 305 active+clean; 180 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 825 KiB/s rd, 1.1 MiB/s wr, 37 op/s
Jan 22 00:06:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:06:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:21.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:06:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Jan 22 00:06:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Jan 22 00:06:22 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Jan 22 00:06:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3266235523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:06:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:06:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:22.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:06:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 00:06:22 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/256006311' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:06:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 00:06:22 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/256006311' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:06:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 180 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 7.0 KiB/s rd, 511 B/s wr, 8 op/s
Jan 22 00:06:23 compute-0 ceph-mon[74318]: osdmap e184: 3 total, 3 up, 3 in
Jan 22 00:06:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/945230324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:06:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/256006311' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:06:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/256006311' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:06:23 compute-0 ceph-mon[74318]: pgmap v1458: 305 pgs: 305 active+clean; 180 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 7.0 KiB/s rd, 511 B/s wr, 8 op/s
Jan 22 00:06:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 00:06:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3028972962' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:06:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 00:06:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3028972962' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:06:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:23.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:23 compute-0 nova_compute[247516]: 2026-01-22 00:06:23.606 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:06:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:06:24 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3028972962' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:06:24 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3028972962' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:06:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:06:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:24.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:06:24 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:06:24.720 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 00:06:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 123 MiB data, 287 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Jan 22 00:06:25 compute-0 nova_compute[247516]: 2026-01-22 00:06:25.005 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:06:25 compute-0 ceph-mon[74318]: pgmap v1459: 305 pgs: 305 active+clean; 123 MiB data, 287 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Jan 22 00:06:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:25.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 00:06:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1861679866' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:06:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 00:06:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1861679866' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:06:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1861679866' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:06:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1861679866' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:06:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:26.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 42 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 52 KiB/s rd, 3.6 KiB/s wr, 74 op/s
Jan 22 00:06:26 compute-0 nova_compute[247516]: 2026-01-22 00:06:26.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:06:26 compute-0 nova_compute[247516]: 2026-01-22 00:06:26.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:06:26 compute-0 nova_compute[247516]: 2026-01-22 00:06:26.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:06:27 compute-0 ceph-mon[74318]: pgmap v1460: 305 pgs: 305 active+clean; 42 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 52 KiB/s rd, 3.6 KiB/s wr, 74 op/s
Jan 22 00:06:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:06:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:27.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:06:27 compute-0 podman[272917]: 2026-01-22 00:06:27.984512345 +0000 UTC m=+0.086268108 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:06:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:28.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:06:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Jan 22 00:06:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Jan 22 00:06:28 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Jan 22 00:06:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 42 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 65 KiB/s rd, 4.5 KiB/s wr, 93 op/s
Jan 22 00:06:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:29.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:29 compute-0 ceph-mon[74318]: osdmap e185: 3 total, 3 up, 3 in
Jan 22 00:06:29 compute-0 ceph-mon[74318]: pgmap v1462: 305 pgs: 305 active+clean; 42 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 65 KiB/s rd, 4.5 KiB/s wr, 93 op/s
Jan 22 00:06:29 compute-0 nova_compute[247516]: 2026-01-22 00:06:29.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:06:29 compute-0 nova_compute[247516]: 2026-01-22 00:06:29.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:06:30 compute-0 nova_compute[247516]: 2026-01-22 00:06:30.014 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:06:30 compute-0 nova_compute[247516]: 2026-01-22 00:06:30.014 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:06:30 compute-0 nova_compute[247516]: 2026-01-22 00:06:30.014 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:06:30 compute-0 nova_compute[247516]: 2026-01-22 00:06:30.014 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:06:30 compute-0 nova_compute[247516]: 2026-01-22 00:06:30.015 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:06:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:06:30 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3735933839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:06:30 compute-0 nova_compute[247516]: 2026-01-22 00:06:30.451 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:06:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:06:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:30.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:06:30 compute-0 nova_compute[247516]: 2026-01-22 00:06:30.640 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:06:30 compute-0 nova_compute[247516]: 2026-01-22 00:06:30.641 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5152MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:06:30 compute-0 nova_compute[247516]: 2026-01-22 00:06:30.642 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:06:30 compute-0 nova_compute[247516]: 2026-01-22 00:06:30.642 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:06:30 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3735933839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:06:30 compute-0 nova_compute[247516]: 2026-01-22 00:06:30.851 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 22 00:06:30 compute-0 nova_compute[247516]: 2026-01-22 00:06:30.851 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:06:30 compute-0 nova_compute[247516]: 2026-01-22 00:06:30.852 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:06:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 42 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 52 KiB/s rd, 3.7 KiB/s wr, 76 op/s
Jan 22 00:06:30 compute-0 nova_compute[247516]: 2026-01-22 00:06:30.908 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Refreshing inventories for resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 22 00:06:31 compute-0 nova_compute[247516]: 2026-01-22 00:06:31.036 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Updating ProviderTree inventory for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 22 00:06:31 compute-0 nova_compute[247516]: 2026-01-22 00:06:31.037 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Updating inventory in ProviderTree for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 00:06:31 compute-0 nova_compute[247516]: 2026-01-22 00:06:31.052 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Refreshing aggregate associations for resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 22 00:06:31 compute-0 nova_compute[247516]: 2026-01-22 00:06:31.088 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Refreshing trait associations for resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 22 00:06:31 compute-0 nova_compute[247516]: 2026-01-22 00:06:31.125 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:06:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:31.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:06:31 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186014439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:06:31 compute-0 nova_compute[247516]: 2026-01-22 00:06:31.623 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:06:31 compute-0 nova_compute[247516]: 2026-01-22 00:06:31.628 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:06:31 compute-0 nova_compute[247516]: 2026-01-22 00:06:31.695 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 00:06:31 compute-0 nova_compute[247516]: 2026-01-22 00:06:31.696 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:06:31 compute-0 nova_compute[247516]: 2026-01-22 00:06:31.697 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:06:31 compute-0 ceph-mon[74318]: pgmap v1463: 305 pgs: 305 active+clean; 42 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 52 KiB/s rd, 3.7 KiB/s wr, 76 op/s
Jan 22 00:06:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1186014439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:06:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:32.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 3.2 KiB/s wr, 66 op/s
Jan 22 00:06:32 compute-0 ceph-mon[74318]: pgmap v1464: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 3.2 KiB/s wr, 66 op/s
Jan 22 00:06:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:33.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:06:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:34.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:34 compute-0 nova_compute[247516]: 2026-01-22 00:06:34.698 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:06:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 2.2 KiB/s wr, 44 op/s
Jan 22 00:06:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:06:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:35.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:06:35 compute-0 sudo[272985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:06:35 compute-0 sudo[272985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:06:35 compute-0 sudo[272985]: pam_unix(sudo:session): session closed for user root
Jan 22 00:06:35 compute-0 sudo[273010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:06:35 compute-0 sudo[273010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:06:35 compute-0 sudo[273010]: pam_unix(sudo:session): session closed for user root
Jan 22 00:06:35 compute-0 ceph-mon[74318]: pgmap v1465: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 2.2 KiB/s wr, 44 op/s
Jan 22 00:06:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:36.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:06:36 compute-0 ceph-mon[74318]: pgmap v1466: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:06:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:37.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:06:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:38.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:06:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:06:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:06:38 compute-0 ceph-mon[74318]: pgmap v1467: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:06:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:39.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:06:39
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['default.rgw.control', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'backups', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', '.mgr']
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:06:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:06:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:06:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:40.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:06:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:06:40 compute-0 ceph-mon[74318]: pgmap v1468: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:06:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:41.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:06:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:42.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:06:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:06:42 compute-0 ceph-mon[74318]: pgmap v1469: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:06:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:43.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:06:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:44.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:06:44 compute-0 ceph-mon[74318]: pgmap v1470: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:06:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:45.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:46.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:06:46 compute-0 ceph-mon[74318]: pgmap v1471: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:06:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:47.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 22 00:06:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 22 00:06:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 22 00:06:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 22 00:06:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 22 00:06:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 22 00:06:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 22 00:06:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 22 00:06:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:06:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:48.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:06:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:06:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:06:48.766 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:06:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:06:48.768 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:06:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:06:48.768 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:06:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:06:48 compute-0 ceph-mon[74318]: pgmap v1472: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:06:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:06:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:49.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:06:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:06:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:50.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:06:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 104 op/s
Jan 22 00:06:50 compute-0 ceph-mon[74318]: pgmap v1473: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 104 op/s
Jan 22 00:06:50 compute-0 podman[273042]: 2026-01-22 00:06:50.994602183 +0000 UTC m=+0.109423411 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 00:06:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:51.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:52.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail; 92 KiB/s rd, 0 B/s wr, 153 op/s
Jan 22 00:06:52 compute-0 ceph-mon[74318]: pgmap v1474: 305 pgs: 305 active+clean; 42 MiB data, 257 MiB used, 21 GiB / 21 GiB avail; 92 KiB/s rd, 0 B/s wr, 153 op/s
Jan 22 00:06:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:53.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:06:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:54.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.217749627768472e-05 of space, bias 1.0, pg target 0.003653248883305416 quantized to 32 (current 32)
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 00:06:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 99 KiB/s rd, 0 B/s wr, 165 op/s
Jan 22 00:06:54 compute-0 ceph-mon[74318]: pgmap v1475: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 99 KiB/s rd, 0 B/s wr, 165 op/s
Jan 22 00:06:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:55.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:55 compute-0 sudo[273070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:06:55 compute-0 sudo[273070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:06:55 compute-0 sudo[273070]: pam_unix(sudo:session): session closed for user root
Jan 22 00:06:55 compute-0 sudo[273095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:06:55 compute-0 sudo[273095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:06:55 compute-0 sudo[273095]: pam_unix(sudo:session): session closed for user root
Jan 22 00:06:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.002000063s ======
Jan 22 00:06:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:56.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Jan 22 00:06:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Jan 22 00:06:56 compute-0 ceph-mon[74318]: pgmap v1476: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Jan 22 00:06:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:57.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:06:58.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:06:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:06:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Jan 22 00:06:58 compute-0 podman[273121]: 2026-01-22 00:06:58.970329559 +0000 UTC m=+0.079999234 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 00:06:58 compute-0 ceph-mon[74318]: pgmap v1477: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Jan 22 00:06:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:06:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:06:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:06:59.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:00.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Jan 22 00:07:00 compute-0 ceph-mon[74318]: pgmap v1478: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Jan 22 00:07:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:07:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:01.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:07:01 compute-0 sudo[273143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:07:01 compute-0 sudo[273143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:01 compute-0 sudo[273143]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:01 compute-0 sudo[273168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:07:01 compute-0 sudo[273168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:01 compute-0 sudo[273168]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:01 compute-0 sudo[273193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:07:01 compute-0 sudo[273193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:01 compute-0 sudo[273193]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:01 compute-0 sudo[273218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 00:07:01 compute-0 sudo[273218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 00:07:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:07:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 00:07:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:07:02 compute-0 sudo[273218]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 22 00:07:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 00:07:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 22 00:07:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 00:07:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:02.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 67 op/s
Jan 22 00:07:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:07:03 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:07:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 00:07:03 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:07:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 00:07:03 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:07:03 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:07:03 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 00:07:03 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 00:07:03 compute-0 ceph-mon[74318]: pgmap v1479: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 67 op/s
Jan 22 00:07:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:03.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:03 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:07:03 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 2d6d91c1-00d1-4cec-8b31-4c092567ecb4 does not exist
Jan 22 00:07:03 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 59af814d-fe8b-437f-b487-2dd383f0a8f0 does not exist
Jan 22 00:07:03 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev d307c1cb-4568-4b1b-a0ec-30eb257ffa17 does not exist
Jan 22 00:07:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 00:07:03 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:07:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 00:07:03 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:07:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:07:03 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:07:03 compute-0 sudo[273275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:07:03 compute-0 sudo[273275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:03 compute-0 sudo[273275]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:03 compute-0 sudo[273301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:07:03 compute-0 sudo[273301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:03 compute-0 sudo[273301]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:03 compute-0 sudo[273326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:07:03 compute-0 sudo[273326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:03 compute-0 sudo[273326]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:03 compute-0 sudo[273351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 00:07:03 compute-0 sudo[273351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:07:04 compute-0 podman[273417]: 2026-01-22 00:07:03.999066664 +0000 UTC m=+0.054004314 container create 422510da60b6045d3e0ec769f84052f08364c7cab34bcf0615aae0d1ec3e6359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 00:07:04 compute-0 systemd[1]: Started libpod-conmon-422510da60b6045d3e0ec769f84052f08364c7cab34bcf0615aae0d1ec3e6359.scope.
Jan 22 00:07:04 compute-0 podman[273417]: 2026-01-22 00:07:03.975043045 +0000 UTC m=+0.029980695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:07:04 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:07:04 compute-0 podman[273417]: 2026-01-22 00:07:04.109768073 +0000 UTC m=+0.164705723 container init 422510da60b6045d3e0ec769f84052f08364c7cab34bcf0615aae0d1ec3e6359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_villani, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:07:04 compute-0 podman[273417]: 2026-01-22 00:07:04.121256767 +0000 UTC m=+0.176194377 container start 422510da60b6045d3e0ec769f84052f08364c7cab34bcf0615aae0d1ec3e6359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 00:07:04 compute-0 podman[273417]: 2026-01-22 00:07:04.128389107 +0000 UTC m=+0.183326757 container attach 422510da60b6045d3e0ec769f84052f08364c7cab34bcf0615aae0d1ec3e6359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_villani, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 00:07:04 compute-0 awesome_villani[273433]: 167 167
Jan 22 00:07:04 compute-0 systemd[1]: libpod-422510da60b6045d3e0ec769f84052f08364c7cab34bcf0615aae0d1ec3e6359.scope: Deactivated successfully.
Jan 22 00:07:04 compute-0 conmon[273433]: conmon 422510da60b6045d3e0e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-422510da60b6045d3e0ec769f84052f08364c7cab34bcf0615aae0d1ec3e6359.scope/container/memory.events
Jan 22 00:07:04 compute-0 podman[273417]: 2026-01-22 00:07:04.129896582 +0000 UTC m=+0.184834192 container died 422510da60b6045d3e0ec769f84052f08364c7cab34bcf0615aae0d1ec3e6359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fac292252186f397fed571dff1644a2e79ebd87f0fcbad9f43914911fd18f3d-merged.mount: Deactivated successfully.
Jan 22 00:07:04 compute-0 podman[273417]: 2026-01-22 00:07:04.176200619 +0000 UTC m=+0.231138229 container remove 422510da60b6045d3e0ec769f84052f08364c7cab34bcf0615aae0d1ec3e6359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:07:04 compute-0 systemd[1]: libpod-conmon-422510da60b6045d3e0ec769f84052f08364c7cab34bcf0615aae0d1ec3e6359.scope: Deactivated successfully.
Jan 22 00:07:04 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:07:04 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:07:04 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:07:04 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:07:04 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:07:04 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:07:04 compute-0 podman[273457]: 2026-01-22 00:07:04.355735138 +0000 UTC m=+0.050335621 container create 929949d448201da0e2e47551ae0407b6f1a06e9ad15467300b1afd7776770d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:07:04 compute-0 systemd[1]: Started libpod-conmon-929949d448201da0e2e47551ae0407b6f1a06e9ad15467300b1afd7776770d3b.scope.
Jan 22 00:07:04 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:07:04 compute-0 podman[273457]: 2026-01-22 00:07:04.332908315 +0000 UTC m=+0.027508868 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe044d991454ca79a11cb3511fb2dc09ce048d48d729640083e9526b664ffed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe044d991454ca79a11cb3511fb2dc09ce048d48d729640083e9526b664ffed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe044d991454ca79a11cb3511fb2dc09ce048d48d729640083e9526b664ffed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe044d991454ca79a11cb3511fb2dc09ce048d48d729640083e9526b664ffed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe044d991454ca79a11cb3511fb2dc09ce048d48d729640083e9526b664ffed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 00:07:04 compute-0 podman[273457]: 2026-01-22 00:07:04.440096046 +0000 UTC m=+0.134696589 container init 929949d448201da0e2e47551ae0407b6f1a06e9ad15467300b1afd7776770d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_tharp, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:07:04 compute-0 podman[273457]: 2026-01-22 00:07:04.453375624 +0000 UTC m=+0.147976117 container start 929949d448201da0e2e47551ae0407b6f1a06e9ad15467300b1afd7776770d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_tharp, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 00:07:04 compute-0 podman[273457]: 2026-01-22 00:07:04.458404439 +0000 UTC m=+0.153004922 container attach 929949d448201da0e2e47551ae0407b6f1a06e9ad15467300b1afd7776770d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 00:07:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:04.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 22 00:07:05 compute-0 epic_tharp[273473]: --> passed data devices: 0 physical, 1 LVM
Jan 22 00:07:05 compute-0 epic_tharp[273473]: --> relative data size: 1.0
Jan 22 00:07:05 compute-0 epic_tharp[273473]: --> All data devices are unavailable
Jan 22 00:07:05 compute-0 ceph-mon[74318]: pgmap v1480: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 22 00:07:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:07:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:05.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:07:05 compute-0 systemd[1]: libpod-929949d448201da0e2e47551ae0407b6f1a06e9ad15467300b1afd7776770d3b.scope: Deactivated successfully.
Jan 22 00:07:05 compute-0 podman[273488]: 2026-01-22 00:07:05.343959359 +0000 UTC m=+0.024279699 container died 929949d448201da0e2e47551ae0407b6f1a06e9ad15467300b1afd7776770d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 00:07:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fe044d991454ca79a11cb3511fb2dc09ce048d48d729640083e9526b664ffed-merged.mount: Deactivated successfully.
Jan 22 00:07:05 compute-0 podman[273488]: 2026-01-22 00:07:05.405133553 +0000 UTC m=+0.085453803 container remove 929949d448201da0e2e47551ae0407b6f1a06e9ad15467300b1afd7776770d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_tharp, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:07:05 compute-0 systemd[1]: libpod-conmon-929949d448201da0e2e47551ae0407b6f1a06e9ad15467300b1afd7776770d3b.scope: Deactivated successfully.
Jan 22 00:07:05 compute-0 sudo[273351]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:05 compute-0 sudo[273503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:07:05 compute-0 sudo[273503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:05 compute-0 sudo[273503]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:05 compute-0 sudo[273528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:07:05 compute-0 sudo[273528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:05 compute-0 sudo[273528]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:05 compute-0 sudo[273553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:07:05 compute-0 sudo[273553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:05 compute-0 sudo[273553]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:05 compute-0 sudo[273578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 22 00:07:05 compute-0 sudo[273578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:06 compute-0 podman[273645]: 2026-01-22 00:07:06.273451503 +0000 UTC m=+0.071690159 container create 459fd5d724812979d5c5fd2da2b5520e9699ebbcbb5c0d473467418bf36ac47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_murdock, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:07:06 compute-0 systemd[1]: Started libpod-conmon-459fd5d724812979d5c5fd2da2b5520e9699ebbcbb5c0d473467418bf36ac47c.scope.
Jan 22 00:07:06 compute-0 podman[273645]: 2026-01-22 00:07:06.242382636 +0000 UTC m=+0.040621342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:07:06 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:07:06 compute-0 podman[273645]: 2026-01-22 00:07:06.395732728 +0000 UTC m=+0.193971424 container init 459fd5d724812979d5c5fd2da2b5520e9699ebbcbb5c0d473467418bf36ac47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_murdock, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:07:06 compute-0 podman[273645]: 2026-01-22 00:07:06.406614533 +0000 UTC m=+0.204853189 container start 459fd5d724812979d5c5fd2da2b5520e9699ebbcbb5c0d473467418bf36ac47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_murdock, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:07:06 compute-0 podman[273645]: 2026-01-22 00:07:06.410860964 +0000 UTC m=+0.209099620 container attach 459fd5d724812979d5c5fd2da2b5520e9699ebbcbb5c0d473467418bf36ac47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_murdock, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:07:06 compute-0 affectionate_murdock[273661]: 167 167
Jan 22 00:07:06 compute-0 systemd[1]: libpod-459fd5d724812979d5c5fd2da2b5520e9699ebbcbb5c0d473467418bf36ac47c.scope: Deactivated successfully.
Jan 22 00:07:06 compute-0 conmon[273661]: conmon 459fd5d724812979d5c5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-459fd5d724812979d5c5fd2da2b5520e9699ebbcbb5c0d473467418bf36ac47c.scope/container/memory.events
Jan 22 00:07:06 compute-0 podman[273645]: 2026-01-22 00:07:06.417738366 +0000 UTC m=+0.215977042 container died 459fd5d724812979d5c5fd2da2b5520e9699ebbcbb5c0d473467418bf36ac47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_murdock, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:07:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-75774e04074bf5ce5b28e0b00d2446106450c50763a5654fffb7c201e8a4f665-merged.mount: Deactivated successfully.
Jan 22 00:07:06 compute-0 podman[273645]: 2026-01-22 00:07:06.466791947 +0000 UTC m=+0.265030593 container remove 459fd5d724812979d5c5fd2da2b5520e9699ebbcbb5c0d473467418bf36ac47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_murdock, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 00:07:06 compute-0 systemd[1]: libpod-conmon-459fd5d724812979d5c5fd2da2b5520e9699ebbcbb5c0d473467418bf36ac47c.scope: Deactivated successfully.
Jan 22 00:07:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:07:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:06.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:07:06 compute-0 podman[273685]: 2026-01-22 00:07:06.713986878 +0000 UTC m=+0.052313652 container create f6ea63d847b716612c474b88532abae81c71156831086c2d1b9f093ca6e25378 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_khayyam, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:07:06 compute-0 systemd[1]: Started libpod-conmon-f6ea63d847b716612c474b88532abae81c71156831086c2d1b9f093ca6e25378.scope.
Jan 22 00:07:06 compute-0 podman[273685]: 2026-01-22 00:07:06.691848776 +0000 UTC m=+0.030175580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:07:06 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1e6d0c71c3846381e71bdb22e4b0389dc26a8c36a46ead8c9e55c670558b816/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1e6d0c71c3846381e71bdb22e4b0389dc26a8c36a46ead8c9e55c670558b816/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1e6d0c71c3846381e71bdb22e4b0389dc26a8c36a46ead8c9e55c670558b816/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1e6d0c71c3846381e71bdb22e4b0389dc26a8c36a46ead8c9e55c670558b816/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:07:06 compute-0 podman[273685]: 2026-01-22 00:07:06.816530637 +0000 UTC m=+0.154857431 container init f6ea63d847b716612c474b88532abae81c71156831086c2d1b9f093ca6e25378 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 00:07:06 compute-0 podman[273685]: 2026-01-22 00:07:06.828410293 +0000 UTC m=+0.166737067 container start f6ea63d847b716612c474b88532abae81c71156831086c2d1b9f093ca6e25378 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 00:07:06 compute-0 podman[273685]: 2026-01-22 00:07:06.83319834 +0000 UTC m=+0.171525114 container attach f6ea63d847b716612c474b88532abae81c71156831086c2d1b9f093ca6e25378 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_khayyam, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:07:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Jan 22 00:07:06 compute-0 ceph-mon[74318]: pgmap v1481: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
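The pgmap summaries that ceph-mgr logs (and ceph-mon echoes, as above) carry the cluster's PG state and capacity in a fixed shape. A sketch for pulling the capacity figures out of such a line; units are kept as strings since the log mixes MiB and GiB:

```python
import re

PGMAP_RE = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
    r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail"
)

line = ("pgmap v1481: 305 pgs: 305 active+clean; 42 MiB data, "
        "261 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s")
print(PGMAP_RE.search(line).groupdict())
# {'ver': '1481', 'pgs': '305', 'data': '42 MiB', 'used': '261 MiB',
#  'avail': '21 GiB', 'total': '21 GiB'}
```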
Jan 22 00:07:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:07.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:07 compute-0 epic_khayyam[273701]: {
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:     "1": [
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:         {
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:             "devices": [
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:                 "/dev/loop3"
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:             ],
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:             "lv_name": "ceph_lv0",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:             "lv_size": "7511998464",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:             "name": "ceph_lv0",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:             "tags": {
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:                 "ceph.cluster_name": "ceph",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:                 "ceph.crush_device_class": "",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:                 "ceph.encrypted": "0",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:                 "ceph.osd_id": "1",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:                 "ceph.type": "block",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:                 "ceph.vdo": "0"
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:             },
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:             "type": "block",
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:             "vg_name": "ceph_vg0"
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:         }
Jan 22 00:07:07 compute-0 epic_khayyam[273701]:     ]
Jan 22 00:07:07 compute-0 epic_khayyam[273701]: }
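The JSON block the epic_khayyam container just printed is an LVM inventory keyed by OSD id, with the Ceph metadata carried in LVM tags; the shape matches what `ceph-volume lvm list --format json` emits, though the exact invocation is not visible in this capture. A sketch, under that assumption, for inverting it into an OSD-to-device map:

```python
import json

def osd_devices(doc: str) -> dict:
    """Map OSD id -> (lv_path, physical devices) from a ceph-volume
    lvm-list style JSON document like the one logged above."""
    out = {}
    for osd_id, lvs in json.loads(doc).items():
        for lv in lvs:
            out[int(osd_id)] = (lv["lv_path"], lv["devices"])
    return out

# With the capture above: {1: ('/dev/ceph_vg0/ceph_lv0', ['/dev/loop3'])}
```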
Jan 22 00:07:07 compute-0 podman[273685]: 2026-01-22 00:07:07.662376044 +0000 UTC m=+1.000702838 container died f6ea63d847b716612c474b88532abae81c71156831086c2d1b9f093ca6e25378 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Jan 22 00:07:07 compute-0 systemd[1]: libpod-f6ea63d847b716612c474b88532abae81c71156831086c2d1b9f093ca6e25378.scope: Deactivated successfully.
Jan 22 00:07:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1e6d0c71c3846381e71bdb22e4b0389dc26a8c36a46ead8c9e55c670558b816-merged.mount: Deactivated successfully.
Jan 22 00:07:07 compute-0 podman[273685]: 2026-01-22 00:07:07.745325858 +0000 UTC m=+1.083652672 container remove f6ea63d847b716612c474b88532abae81c71156831086c2d1b9f093ca6e25378 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_khayyam, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:07:07 compute-0 systemd[1]: libpod-conmon-f6ea63d847b716612c474b88532abae81c71156831086c2d1b9f093ca6e25378.scope: Deactivated successfully.
Jan 22 00:07:07 compute-0 sudo[273578]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:07 compute-0 sudo[273725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:07:07 compute-0 sudo[273725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:07 compute-0 sudo[273725]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:07 compute-0 sudo[273750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:07:07 compute-0 sudo[273750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:07 compute-0 sudo[273750]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:08 compute-0 sudo[273775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:07:08 compute-0 sudo[273775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:08 compute-0 sudo[273775]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:08 compute-0 sudo[273800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 22 00:07:08 compute-0 sudo[273800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:07:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:08.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:07:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:07:08 compute-0 podman[273866]: 2026-01-22 00:07:08.647674355 +0000 UTC m=+0.065993564 container create 158d9812a651923b8154ee19e322d6a32d73c99647e6f2c294a533c41adec15c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 00:07:08 compute-0 systemd[1]: Started libpod-conmon-158d9812a651923b8154ee19e322d6a32d73c99647e6f2c294a533c41adec15c.scope.
Jan 22 00:07:08 compute-0 podman[273866]: 2026-01-22 00:07:08.621680104 +0000 UTC m=+0.039999373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:07:08 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:07:08 compute-0 podman[273866]: 2026-01-22 00:07:08.741413231 +0000 UTC m=+0.159732480 container init 158d9812a651923b8154ee19e322d6a32d73c99647e6f2c294a533c41adec15c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_boyd, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:07:08 compute-0 podman[273866]: 2026-01-22 00:07:08.751989807 +0000 UTC m=+0.170309016 container start 158d9812a651923b8154ee19e322d6a32d73c99647e6f2c294a533c41adec15c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_boyd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:07:08 compute-0 podman[273866]: 2026-01-22 00:07:08.755999941 +0000 UTC m=+0.174319160 container attach 158d9812a651923b8154ee19e322d6a32d73c99647e6f2c294a533c41adec15c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_boyd, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:07:08 compute-0 trusting_boyd[273882]: 167 167
Jan 22 00:07:08 compute-0 systemd[1]: libpod-158d9812a651923b8154ee19e322d6a32d73c99647e6f2c294a533c41adec15c.scope: Deactivated successfully.
Jan 22 00:07:08 compute-0 podman[273866]: 2026-01-22 00:07:08.759404026 +0000 UTC m=+0.177723245 container died 158d9812a651923b8154ee19e322d6a32d73c99647e6f2c294a533c41adec15c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_boyd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 00:07:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-082b0cf5225021de7c4e671adbf56fb246dbb26171e461ebd535ef1ccb0002d0-merged.mount: Deactivated successfully.
Jan 22 00:07:08 compute-0 podman[273866]: 2026-01-22 00:07:08.813844812 +0000 UTC m=+0.232164031 container remove 158d9812a651923b8154ee19e322d6a32d73c99647e6f2c294a533c41adec15c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 00:07:08 compute-0 systemd[1]: libpod-conmon-158d9812a651923b8154ee19e322d6a32d73c99647e6f2c294a533c41adec15c.scope: Deactivated successfully.
Jan 22 00:07:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:07:09 compute-0 podman[273906]: 2026-01-22 00:07:09.083700212 +0000 UTC m=+0.067813550 container create a6a34ade328460dee3da6943a7a7292057d3c0fe6556a294b0767c525a04beff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:07:09 compute-0 systemd[1]: Started libpod-conmon-a6a34ade328460dee3da6943a7a7292057d3c0fe6556a294b0767c525a04beff.scope.
Jan 22 00:07:09 compute-0 podman[273906]: 2026-01-22 00:07:09.056180945 +0000 UTC m=+0.040294323 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:07:09 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:07:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c61d54c3d7b04a97c8fd472330708cfcbf599f647dcd37d132ce8ab2eff46a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:07:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c61d54c3d7b04a97c8fd472330708cfcbf599f647dcd37d132ce8ab2eff46a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:07:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c61d54c3d7b04a97c8fd472330708cfcbf599f647dcd37d132ce8ab2eff46a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:07:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c61d54c3d7b04a97c8fd472330708cfcbf599f647dcd37d132ce8ab2eff46a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:07:09 compute-0 ceph-mon[74318]: pgmap v1482: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:07:09 compute-0 podman[273906]: 2026-01-22 00:07:09.203029356 +0000 UTC m=+0.187142744 container init a6a34ade328460dee3da6943a7a7292057d3c0fe6556a294b0767c525a04beff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bell, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:07:09 compute-0 podman[273906]: 2026-01-22 00:07:09.215742889 +0000 UTC m=+0.199856217 container start a6a34ade328460dee3da6943a7a7292057d3c0fe6556a294b0767c525a04beff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Jan 22 00:07:09 compute-0 podman[273906]: 2026-01-22 00:07:09.219763562 +0000 UTC m=+0.203876910 container attach a6a34ade328460dee3da6943a7a7292057d3c0fe6556a294b0767c525a04beff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bell, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 00:07:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:07:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:07:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:07:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:07:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:07:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:07:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:09.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:10 compute-0 crazy_bell[273922]: {
Jan 22 00:07:10 compute-0 crazy_bell[273922]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 22 00:07:10 compute-0 crazy_bell[273922]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:07:10 compute-0 crazy_bell[273922]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 00:07:10 compute-0 crazy_bell[273922]:         "osd_id": 1,
Jan 22 00:07:10 compute-0 crazy_bell[273922]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:07:10 compute-0 crazy_bell[273922]:         "type": "bluestore"
Jan 22 00:07:10 compute-0 crazy_bell[273922]:     }
Jan 22 00:07:10 compute-0 crazy_bell[273922]: }
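This block is the output of the `ceph-volume ... raw list --format json` call that the sudo line at 00:07:08 shows cephadm launching: top-level keys are OSD fsids, each describing one bluestore device. A sketch reproducing the listing the same way, assuming a root shell on a cephadm-managed host with this cluster fsid:

```python
import json
import subprocess

# Same passthrough invocation cephadm logged above; requires root.
out = subprocess.run(
    ["cephadm", "ceph-volume",
     "--fsid", "3759241a-7f1c-520d-ba17-879943ee2f00",
     "--", "raw", "list", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout

by_osd = {e["osd_id"]: e["device"]
          for e in json.loads(out).values()
          if e.get("type") == "bluestore"}
print(by_osd)  # e.g. {1: '/dev/mapper/ceph_vg0-ceph_lv0'}
```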
Jan 22 00:07:10 compute-0 systemd[1]: libpod-a6a34ade328460dee3da6943a7a7292057d3c0fe6556a294b0767c525a04beff.scope: Deactivated successfully.
Jan 22 00:07:10 compute-0 podman[273906]: 2026-01-22 00:07:10.140794274 +0000 UTC m=+1.124907612 container died a6a34ade328460dee3da6943a7a7292057d3c0fe6556a294b0767c525a04beff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:07:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c61d54c3d7b04a97c8fd472330708cfcbf599f647dcd37d132ce8ab2eff46a2-merged.mount: Deactivated successfully.
Jan 22 00:07:10 compute-0 podman[273906]: 2026-01-22 00:07:10.212299656 +0000 UTC m=+1.196412984 container remove a6a34ade328460dee3da6943a7a7292057d3c0fe6556a294b0767c525a04beff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:07:10 compute-0 systemd[1]: libpod-conmon-a6a34ade328460dee3da6943a7a7292057d3c0fe6556a294b0767c525a04beff.scope: Deactivated successfully.
Jan 22 00:07:10 compute-0 sudo[273800]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:07:10 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:07:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:07:10 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:07:10 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 5d4abcd1-764f-4c39-8a14-b83d72b981ae does not exist
Jan 22 00:07:10 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 44cffac0-ba7a-477c-bc4f-c098fe8835d5 does not exist
Jan 22 00:07:10 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 8653c653-2709-4b17-8cd3-d072f736d897 does not exist
Jan 22 00:07:10 compute-0 sudo[273957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:07:10 compute-0 sudo[273957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:10 compute-0 sudo[273957]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:10 compute-0 sudo[273982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 00:07:10 compute-0 sudo[273982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:10 compute-0 sudo[273982]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:10.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:07:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:07:11 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:07:11 compute-0 ceph-mon[74318]: pgmap v1483: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:07:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:11.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:12.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 op/s
Jan 22 00:07:12 compute-0 ceph-mon[74318]: pgmap v1484: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 op/s
Jan 22 00:07:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:13.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:07:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:14.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 2 op/s
Jan 22 00:07:14 compute-0 ceph-mon[74318]: pgmap v1485: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 2 op/s
Jan 22 00:07:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:15.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:15 compute-0 sudo[274010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:07:15 compute-0 sudo[274010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:15 compute-0 sudo[274010]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:15 compute-0 sudo[274035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:07:15 compute-0 sudo[274035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:15 compute-0 sudo[274035]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:16.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 00:07:16 compute-0 ceph-mon[74318]: pgmap v1486: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 00:07:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:17.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:18.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:07:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 00:07:18 compute-0 ceph-mon[74318]: pgmap v1487: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 00:07:18 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/148502073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:07:19 compute-0 nova_compute[247516]: 2026-01-22 00:07:18.994 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:07:19 compute-0 nova_compute[247516]: 2026-01-22 00:07:18.996 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:07:19 compute-0 nova_compute[247516]: 2026-01-22 00:07:18.997 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:07:19 compute-0 nova_compute[247516]: 2026-01-22 00:07:19.018 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 00:07:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:19.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:19 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/4193016375' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:07:19 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/4193016375' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:07:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:20.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 22 KiB/s wr, 19 op/s
Jan 22 00:07:20 compute-0 nova_compute[247516]: 2026-01-22 00:07:20.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:07:20 compute-0 nova_compute[247516]: 2026-01-22 00:07:20.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:07:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/4246150456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:07:21 compute-0 ceph-mon[74318]: pgmap v1488: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 22 KiB/s wr, 19 op/s
Jan 22 00:07:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:07:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:21.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:07:21 compute-0 podman[274063]: 2026-01-22 00:07:21.996423221 +0000 UTC m=+0.109019648 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
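The periodic podman health_status events (here for ovn_controller, later for ovn_metadata_agent) embed the container's entire config_data blob, so the useful signal is just the name/health pair. A reduction sketch; the regexes assume the field ordering seen in these lines:

```python
import re
import sys

NAME_RE = re.compile(r"\bname=([^,)]+)")           # first name= is the container name
HEALTH_RE = re.compile(r"\bhealth_status=([^,)]+)")

for line in sys.stdin:
    if "container health_status" in line:
        name, health = NAME_RE.search(line), HEALTH_RE.search(line)
        if name and health:
            print(name.group(1), health.group(1))  # e.g. ovn_controller healthy
```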
Jan 22 00:07:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:22.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 22 KiB/s wr, 19 op/s
Jan 22 00:07:22 compute-0 ceph-mon[74318]: pgmap v1489: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 22 KiB/s wr, 19 op/s
Jan 22 00:07:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:07:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:23.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:07:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:07:23 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:07:23.888 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 00:07:23 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:07:23.891 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 00:07:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2970852605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:07:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:24.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 20 op/s
Jan 22 00:07:24 compute-0 nova_compute[247516]: 2026-01-22 00:07:24.988 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:07:24 compute-0 nova_compute[247516]: 2026-01-22 00:07:24.989 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:07:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2288272400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:07:25 compute-0 ceph-mon[74318]: pgmap v1490: 305 pgs: 305 active+clean; 42 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 20 op/s
Jan 22 00:07:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:25.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2327086135' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:07:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2327086135' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:07:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:07:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:26.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:07:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 42 MiB data, 265 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 24 op/s
Jan 22 00:07:27 compute-0 ceph-mon[74318]: pgmap v1491: 305 pgs: 305 active+clean; 42 MiB data, 265 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 24 op/s
Jan 22 00:07:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:07:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:27.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:07:27 compute-0 nova_compute[247516]: 2026-01-22 00:07:27.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:07:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:28.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:07:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 42 MiB data, 265 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 767 B/s wr, 22 op/s
Jan 22 00:07:28 compute-0 nova_compute[247516]: 2026-01-22 00:07:28.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:07:28 compute-0 nova_compute[247516]: 2026-01-22 00:07:28.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:07:29 compute-0 ceph-mon[74318]: pgmap v1492: 305 pgs: 305 active+clean; 42 MiB data, 265 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 767 B/s wr, 22 op/s
Jan 22 00:07:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:29.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:29 compute-0 podman[274094]: 2026-01-22 00:07:29.967906176 +0000 UTC m=+0.076236229 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 00:07:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:30.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 74 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 52 op/s
Jan 22 00:07:30 compute-0 ceph-mon[74318]: pgmap v1493: 305 pgs: 305 active+clean; 74 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 52 op/s
Jan 22 00:07:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:31.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Jan 22 00:07:31 compute-0 nova_compute[247516]: 2026-01-22 00:07:31.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:07:31 compute-0 nova_compute[247516]: 2026-01-22 00:07:31.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:07:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Jan 22 00:07:32 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Jan 22 00:07:32 compute-0 nova_compute[247516]: 2026-01-22 00:07:32.025 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:07:32 compute-0 nova_compute[247516]: 2026-01-22 00:07:32.026 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:07:32 compute-0 nova_compute[247516]: 2026-01-22 00:07:32.026 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:07:32 compute-0 nova_compute[247516]: 2026-01-22 00:07:32.026 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:07:32 compute-0 nova_compute[247516]: 2026-01-22 00:07:32.027 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:07:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:07:32 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/461200345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:07:32 compute-0 nova_compute[247516]: 2026-01-22 00:07:32.485 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
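Nova's resource tracker sizes Ceph-backed storage by shelling out to exactly the `ceph df` command logged above. A standalone sketch of the same call; it assumes the client.openstack keyring and /etc/ceph/ceph.conf are readable by the caller, and that the JSON carries the usual top-level "stats" object:

```python
import json
import subprocess

# Same command string the resource tracker logged above.
out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout

stats = json.loads(out)["stats"]
print(stats["total_bytes"], stats["total_avail_bytes"])
```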
Jan 22 00:07:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:07:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:32.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:07:32 compute-0 nova_compute[247516]: 2026-01-22 00:07:32.674 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:07:32 compute-0 nova_compute[247516]: 2026-01-22 00:07:32.675 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5129MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:07:32 compute-0 nova_compute[247516]: 2026-01-22 00:07:32.676 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:07:32 compute-0 nova_compute[247516]: 2026-01-22 00:07:32.676 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:07:32 compute-0 nova_compute[247516]: 2026-01-22 00:07:32.802 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
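This WARNING means Placement still holds allocations for consumer b246822e-62e5-45d0-84c6-8abd60cdbeb0 against this host's resource provider while the instance no longer exists in the Nova database; leaked allocations like this inflate used capacity until they are removed. A sketch for inspecting the consumer directly against the Placement API (the endpoint URL and token below are assumptions; the GET /allocations/{consumer_uuid} path is the standard Placement API):

    import requests

    PLACEMENT = 'http://placement.example.com:8778'    # assumed endpoint
    HEADERS = {'X-Auth-Token': '...',                  # assumed admin token
               'OpenStack-API-Version': 'placement 1.28'}

    r = requests.get(PLACEMENT + '/allocations/'
                     'b246822e-62e5-45d0-84c6-8abd60cdbeb0',
                     headers=HEADERS)
    print(r.json().get('allocations'))     # per-provider resource amounts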
Jan 22 00:07:32 compute-0 nova_compute[247516]: 2026-01-22 00:07:32.803 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:07:32 compute-0 nova_compute[247516]: 2026-01-22 00:07:32.803 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:07:32 compute-0 nova_compute[247516]: 2026-01-22 00:07:32.884 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:07:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 49 op/s
Jan 22 00:07:32 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:07:32.893 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
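The DbSetCommand above is the OVN metadata agent acknowledging the southbound sequence number: it writes neutron:ovn-metadata-sb-cfg = 22 into its Chassis_Private record through an ovsdbapp IDL transaction. Roughly, assuming an already-connected southbound API object sb_api:

    # Sketch: bump the agent's acked sb-cfg in Chassis_Private.
    with sb_api.transaction(check_error=True) as txn:
        txn.add(sb_api.db_set(
            'Chassis_Private',
            'c2a76040-4536-46ac-93c9-20aa76f22ff4',
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'})))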
Jan 22 00:07:33 compute-0 ceph-mon[74318]: osdmap e186: 3 total, 3 up, 3 in
Jan 22 00:07:33 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/461200345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:07:33 compute-0 ceph-mon[74318]: pgmap v1495: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 49 op/s
Jan 22 00:07:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:07:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:33.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:07:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:07:33 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1973656777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:07:33 compute-0 nova_compute[247516]: 2026-01-22 00:07:33.496 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:07:33 compute-0 nova_compute[247516]: 2026-01-22 00:07:33.502 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:07:33 compute-0 nova_compute[247516]: 2026-01-22 00:07:33.529 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
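The inventory above is what Placement uses for admission control; usable capacity per resource class is (total - reserved) * allocation_ratio. Worked from the logged figures:

    vcpu    = (8    - 0)   * 4.0   # 32 schedulable VCPUs
    ram_mb  = (7679 - 512) * 1.0   # 7167 MB schedulable RAM
    disk_gb = (20   - 0)   * 0.9   # 18 GB schedulable disk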
Jan 22 00:07:33 compute-0 nova_compute[247516]: 2026-01-22 00:07:33.531 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:07:33 compute-0 nova_compute[247516]: 2026-01-22 00:07:33.531 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:07:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:07:34 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1973656777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:07:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:07:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:34.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:07:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Jan 22 00:07:35 compute-0 ceph-mon[74318]: pgmap v1496: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 2.1 MiB/s wr, 55 op/s
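pgmap lines like the two above carry the cluster's rolling usage and client throughput, and they are the easiest signal to extract when grepping long captures like this one. A small parser for the fixed part of the format (a sketch; the trailing rd/wr/op rates vary and are left unparsed):

    import re

    PGMAP_RE = re.compile(
        r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; '
        r'(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, '
        r'(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail')

    line = ('pgmap v1496: 305 pgs: 305 active+clean; 88 MiB data, '
            '286 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, '
            '2.1 MiB/s wr, 55 op/s')
    print(PGMAP_RE.search(line).groupdict())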
Jan 22 00:07:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:35.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:36 compute-0 sudo[274161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:07:36 compute-0 sudo[274161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:36 compute-0 sudo[274161]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:36 compute-0 sudo[274186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:07:36 compute-0 sudo[274186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:36 compute-0 sudo[274186]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:36 compute-0 nova_compute[247516]: 2026-01-22 00:07:36.531 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:07:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:36.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Jan 22 00:07:36 compute-0 ceph-mon[74318]: pgmap v1497: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Jan 22 00:07:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:37.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:38.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:07:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:07:39
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'images', 'default.rgw.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr']
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 22 00:07:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:39.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:39 compute-0 ceph-mon[74318]: pgmap v1498: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:07:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:07:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Jan 22 00:07:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Jan 22 00:07:40 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Jan 22 00:07:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:40.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 806 B/s wr, 14 op/s
Jan 22 00:07:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:07:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:41.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:07:41 compute-0 ceph-mon[74318]: osdmap e187: 3 total, 3 up, 3 in
Jan 22 00:07:41 compute-0 ceph-mon[74318]: pgmap v1500: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 806 B/s wr, 14 op/s
Jan 22 00:07:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 00:07:42 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3402915840' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:07:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 00:07:42 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3402915840' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:07:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:42.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:42 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3402915840' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:07:42 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3402915840' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
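The paired df and osd pool get-quota dispatches from 192.168.122.10 are a client polling pool usage and quota under the client.openstack identity (the volumes pool suggests the volume service's periodic stats collection). The same two queries, issued the way the log shows and parsed as JSON (a sketch):

    import json
    import subprocess

    def pool_usage(pool='volumes'):
        base = ['ceph', '--id', 'openstack',
                '--conf', '/etc/ceph/ceph.conf', '--format', 'json']
        df = json.loads(subprocess.check_output(base + ['df']))
        quota = json.loads(subprocess.check_output(
            base + ['osd', 'pool', 'get-quota', pool]))
        return df, quota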
Jan 22 00:07:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 21 GiB / 21 GiB avail; 9.6 KiB/s rd, 716 B/s wr, 12 op/s
Jan 22 00:07:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:43.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:07:43 compute-0 ceph-mon[74318]: pgmap v1501: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 21 GiB / 21 GiB avail; 9.6 KiB/s rd, 716 B/s wr, 12 op/s
Jan 22 00:07:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:07:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:44.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:07:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 66 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.1 KiB/s wr, 25 op/s
Jan 22 00:07:44 compute-0 ceph-mon[74318]: pgmap v1502: 305 pgs: 305 active+clean; 66 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.1 KiB/s wr, 25 op/s
Jan 22 00:07:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:45.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:07:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:46.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:07:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 42 MiB data, 250 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Jan 22 00:07:46 compute-0 ceph-mon[74318]: pgmap v1503: 305 pgs: 305 active+clean; 42 MiB data, 250 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Jan 22 00:07:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:47.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:48.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:07:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Jan 22 00:07:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Jan 22 00:07:48 compute-0 ceph-mon[74318]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Jan 22 00:07:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:07:48.768 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:07:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:07:48.768 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:07:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:07:48.769 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:07:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 42 MiB data, 250 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Jan 22 00:07:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:07:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:49.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:07:49 compute-0 ceph-mon[74318]: osdmap e188: 3 total, 3 up, 3 in
Jan 22 00:07:49 compute-0 ceph-mon[74318]: pgmap v1505: 305 pgs: 305 active+clean; 42 MiB data, 250 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:07:49.714400) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040469714520, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1363, "num_deletes": 254, "total_data_size": 2182204, "memory_usage": 2214672, "flush_reason": "Manual Compaction"}
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040469739347, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 2144847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32452, "largest_seqno": 33814, "table_properties": {"data_size": 2138376, "index_size": 3670, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14033, "raw_average_key_size": 20, "raw_value_size": 2125255, "raw_average_value_size": 3102, "num_data_blocks": 161, "num_entries": 685, "num_filter_entries": 685, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769040359, "oldest_key_time": 1769040359, "file_creation_time": 1769040469, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 24999 microseconds, and 11056 cpu microseconds.
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:07:49.739446) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 2144847 bytes OK
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:07:49.739473) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:07:49.741765) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:07:49.741787) EVENT_LOG_v1 {"time_micros": 1769040469741780, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:07:49.741810) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2176213, prev total WAL file size 2176213, number of live WAL files 2.
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:07:49.743022) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(2094KB)], [71(8299KB)]
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040469743184, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 10643499, "oldest_snapshot_seqno": -1}
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5676 keys, 8728935 bytes, temperature: kUnknown
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040469832952, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 8728935, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8691716, "index_size": 21941, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 145795, "raw_average_key_size": 25, "raw_value_size": 8589908, "raw_average_value_size": 1513, "num_data_blocks": 883, "num_entries": 5676, "num_filter_entries": 5676, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769040469, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:07:49.833228) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8728935 bytes
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:07:49.834957) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 118.5 rd, 97.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.1 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(9.0) write-amplify(4.1) OK, records in: 6201, records dropped: 525 output_compression: NoCompression
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:07:49.834977) EVENT_LOG_v1 {"time_micros": 1769040469834967, "job": 40, "event": "compaction_finished", "compaction_time_micros": 89842, "compaction_time_cpu_micros": 43562, "output_level": 6, "num_output_files": 1, "total_output_size": 8728935, "num_input_records": 6201, "num_output_records": 5676, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040469835509, "job": 40, "event": "table_file_deletion", "file_number": 73}
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040469837103, "job": 40, "event": "table_file_deletion", "file_number": 71}
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:07:49.742929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:07:49.837137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:07:49.837142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:07:49.837144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:07:49.837146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:07:49 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:07:49.837148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
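The flush/compaction burst above is the monitor's RocksDB store compacting L0 into L6 (JOBs 39 and 40). The amplification figures RocksDB reports can be checked from the file sizes in the same lines:

    # Inputs/outputs from JOB 40 above:
    l0_in = 2144847        # bytes, L0 input file #73
    l6_in = 8299 * 1024    # bytes, L6 input file #71 ("8299KB")
    out   = 8728935        # bytes, L6 output file #74

    write_amp = out / l0_in                    # ~4.07 -> logged "write-amplify(4.1)"
    rw_amp    = (l0_in + l6_in + out) / l0_in  # ~9.03 -> logged "read-write-amplify(9.0)"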
Jan 22 00:07:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:50.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 42 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.6 KiB/s wr, 34 op/s
Jan 22 00:07:50 compute-0 ceph-mon[74318]: pgmap v1506: 305 pgs: 305 active+clean; 42 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.6 KiB/s wr, 34 op/s
Jan 22 00:07:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:51.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:52.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 42 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.7 KiB/s wr, 38 op/s
Jan 22 00:07:52 compute-0 ceph-mon[74318]: pgmap v1507: 305 pgs: 305 active+clean; 42 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.7 KiB/s wr, 38 op/s
Jan 22 00:07:53 compute-0 podman[274219]: 2026-01-22 00:07:53.029034885 +0000 UTC m=+0.130548041 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 00:07:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:53.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.2541003629257397e-05 of space, bias 1.0, pg target 0.003762301088777219 quantized to 32 (current 32)
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
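The pg_autoscaler targets above follow a simple rule: pg target = usage_ratio * bias * (mon_target_pg_per_osd * OSD count), then quantized. With the 3 OSDs shown in the osdmap lines and the default mon_target_pg_per_osd of 100 (an assumption; the option value is not shown in this log), two of the logged targets reproduce exactly:

    images = 0.0019031427391587568 * 1.0 * 300   # 0.57094... -> "pg target 0.570942..."
    meta   = 1.4540294062907128e-06 * 4.0 * 300  # 0.0017448... (bias 4.0 pool)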
Jan 22 00:07:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:07:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:54.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:07:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 57 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 721 KiB/s wr, 34 op/s
Jan 22 00:07:54 compute-0 ceph-mon[74318]: pgmap v1508: 305 pgs: 305 active+clean; 57 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 721 KiB/s wr, 34 op/s
Jan 22 00:07:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:55.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 00:07:55 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2467821069' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:07:55 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 00:07:55 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2467821069' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:07:56 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2467821069' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:07:56 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2467821069' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:07:56 compute-0 sudo[274247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:07:56 compute-0 sudo[274247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:56 compute-0 sudo[274247]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:56 compute-0 sudo[274272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:07:56 compute-0 sudo[274272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:07:56 compute-0 sudo[274272]: pam_unix(sudo:session): session closed for user root
Jan 22 00:07:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:07:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:56.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:07:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 88 MiB data, 270 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 22 00:07:57 compute-0 ceph-mon[74318]: pgmap v1509: 305 pgs: 305 active+clean; 88 MiB data, 270 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 22 00:07:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:57.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:07:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:07:58.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:07:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:07:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 88 MiB data, 270 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 22 00:07:58 compute-0 ceph-mon[74318]: pgmap v1510: 305 pgs: 305 active+clean; 88 MiB data, 270 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 22 00:07:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:07:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:07:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:07:59.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:00.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 88 MiB data, 270 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 22 00:08:00 compute-0 podman[274299]: 2026-01-22 00:08:00.977641097 +0000 UTC m=+0.088520808 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 00:08:00 compute-0 ceph-mon[74318]: pgmap v1511: 305 pgs: 305 active+clean; 88 MiB data, 270 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 22 00:08:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 00:08:01 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2438913287' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:08:01 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 00:08:01 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2438913287' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:08:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:01.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:02 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2438913287' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:08:02 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2438913287' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:08:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:02.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 72 MiB data, 263 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Jan 22 00:08:03 compute-0 ceph-mon[74318]: pgmap v1512: 305 pgs: 305 active+clean; 72 MiB data, 263 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Jan 22 00:08:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:03.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:08:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:04.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 57 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 22 00:08:04 compute-0 ceph-mon[74318]: pgmap v1513: 305 pgs: 305 active+clean; 57 MiB data, 256 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 22 00:08:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:05.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:06.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.2 MiB/s wr, 48 op/s
Jan 22 00:08:06 compute-0 ceph-mon[74318]: pgmap v1514: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.2 MiB/s wr, 48 op/s
Jan 22 00:08:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:07.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:08.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:08:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Jan 22 00:08:09 compute-0 ceph-mon[74318]: pgmap v1515: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Jan 22 00:08:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:08:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:08:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:08:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:08:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:08:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:08:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:09.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:10.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:10 compute-0 sudo[274324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:08:10 compute-0 sudo[274324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:10 compute-0 sudo[274324]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 22 00:08:10 compute-0 sudo[274349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:08:10 compute-0 sudo[274349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:10 compute-0 sudo[274349]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:10 compute-0 ceph-mon[74318]: pgmap v1516: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 22 00:08:11 compute-0 sudo[274374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:08:11 compute-0 sudo[274374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:11 compute-0 sudo[274374]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:11 compute-0 sudo[274399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 00:08:11 compute-0 sudo[274399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:11.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:11 compute-0 sudo[274399]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 22 00:08:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 00:08:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:08:11 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:08:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 00:08:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:08:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 00:08:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:08:11 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 9e0bfaf4-1ce9-4441-a261-270bd0c4fffd does not exist
Jan 22 00:08:11 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev bfab5692-baba-454e-8243-baf10feed8d3 does not exist
Jan 22 00:08:11 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev b706404d-4bd2-4947-b096-cbfb761276cd does not exist
Jan 22 00:08:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 00:08:11 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:08:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 00:08:11 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:08:11 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:08:11 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:08:11 compute-0 sudo[274455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:08:11 compute-0 sudo[274455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:11 compute-0 sudo[274455]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:11 compute-0 sudo[274480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:08:11 compute-0 sudo[274480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:11 compute-0 sudo[274480]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:11 compute-0 sudo[274505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:08:11 compute-0 sudo[274505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:11 compute-0 sudo[274505]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 00:08:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:08:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:08:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:08:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:08:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:08:12 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:08:12 compute-0 sudo[274530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 00:08:12 compute-0 sudo[274530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:12 compute-0 podman[274597]: 2026-01-22 00:08:12.445889912 +0000 UTC m=+0.067338395 container create 3d67a1d6fc7f0b67eef7f1bdc462fb0c9aad2e2258baa7887beca30bde38d396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 00:08:12 compute-0 systemd[1]: Started libpod-conmon-3d67a1d6fc7f0b67eef7f1bdc462fb0c9aad2e2258baa7887beca30bde38d396.scope.
Jan 22 00:08:12 compute-0 podman[274597]: 2026-01-22 00:08:12.420467749 +0000 UTC m=+0.041916282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:08:12 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:08:12 compute-0 podman[274597]: 2026-01-22 00:08:12.547086438 +0000 UTC m=+0.168534941 container init 3d67a1d6fc7f0b67eef7f1bdc462fb0c9aad2e2258baa7887beca30bde38d396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 00:08:12 compute-0 podman[274597]: 2026-01-22 00:08:12.553267009 +0000 UTC m=+0.174715462 container start 3d67a1d6fc7f0b67eef7f1bdc462fb0c9aad2e2258baa7887beca30bde38d396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 00:08:12 compute-0 podman[274597]: 2026-01-22 00:08:12.557307593 +0000 UTC m=+0.178756076 container attach 3d67a1d6fc7f0b67eef7f1bdc462fb0c9aad2e2258baa7887beca30bde38d396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:08:12 compute-0 interesting_carson[274613]: 167 167
Jan 22 00:08:12 compute-0 systemd[1]: libpod-3d67a1d6fc7f0b67eef7f1bdc462fb0c9aad2e2258baa7887beca30bde38d396.scope: Deactivated successfully.
Jan 22 00:08:12 compute-0 conmon[274613]: conmon 3d67a1d6fc7f0b67eef7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3d67a1d6fc7f0b67eef7f1bdc462fb0c9aad2e2258baa7887beca30bde38d396.scope/container/memory.events
Jan 22 00:08:12 compute-0 podman[274597]: 2026-01-22 00:08:12.562848283 +0000 UTC m=+0.184296726 container died 3d67a1d6fc7f0b67eef7f1bdc462fb0c9aad2e2258baa7887beca30bde38d396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:08:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffa0eaa5064f4c1be9ffb368cf584d2cd2405443b76009d72731f36ad71e558a-merged.mount: Deactivated successfully.
Jan 22 00:08:12 compute-0 podman[274597]: 2026-01-22 00:08:12.60103615 +0000 UTC m=+0.222484613 container remove 3d67a1d6fc7f0b67eef7f1bdc462fb0c9aad2e2258baa7887beca30bde38d396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:08:12 compute-0 systemd[1]: libpod-conmon-3d67a1d6fc7f0b67eef7f1bdc462fb0c9aad2e2258baa7887beca30bde38d396.scope: Deactivated successfully.
Jan 22 00:08:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:12.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:12 compute-0 podman[274638]: 2026-01-22 00:08:12.798409017 +0000 UTC m=+0.042757368 container create d7107feba5bf0878e9c65f8f8b3a3812da23274a7b4cb58fca2ab74ab87c401a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Jan 22 00:08:12 compute-0 systemd[1]: Started libpod-conmon-d7107feba5bf0878e9c65f8f8b3a3812da23274a7b4cb58fca2ab74ab87c401a.scope.
Jan 22 00:08:12 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2d07c4c17d0f482ae3b34958eae58f6cf70462b12a0e30ebd4b5fe015a60a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2d07c4c17d0f482ae3b34958eae58f6cf70462b12a0e30ebd4b5fe015a60a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2d07c4c17d0f482ae3b34958eae58f6cf70462b12a0e30ebd4b5fe015a60a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2d07c4c17d0f482ae3b34958eae58f6cf70462b12a0e30ebd4b5fe015a60a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2d07c4c17d0f482ae3b34958eae58f6cf70462b12a0e30ebd4b5fe015a60a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 00:08:12 compute-0 podman[274638]: 2026-01-22 00:08:12.783144768 +0000 UTC m=+0.027493139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:08:12 compute-0 podman[274638]: 2026-01-22 00:08:12.882341782 +0000 UTC m=+0.126690153 container init d7107feba5bf0878e9c65f8f8b3a3812da23274a7b4cb58fca2ab74ab87c401a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_torvalds, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 00:08:12 compute-0 podman[274638]: 2026-01-22 00:08:12.889773091 +0000 UTC m=+0.134121442 container start d7107feba5bf0878e9c65f8f8b3a3812da23274a7b4cb58fca2ab74ab87c401a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:08:12 compute-0 podman[274638]: 2026-01-22 00:08:12.893583618 +0000 UTC m=+0.137931969 container attach d7107feba5bf0878e9c65f8f8b3a3812da23274a7b4cb58fca2ab74ab87c401a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 00:08:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 597 B/s wr, 22 op/s
Jan 22 00:08:13 compute-0 ceph-mon[74318]: pgmap v1517: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 597 B/s wr, 22 op/s
Jan 22 00:08:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:13.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:08:13 compute-0 lucid_torvalds[274654]: --> passed data devices: 0 physical, 1 LVM
Jan 22 00:08:13 compute-0 lucid_torvalds[274654]: --> relative data size: 1.0
Jan 22 00:08:13 compute-0 lucid_torvalds[274654]: --> All data devices are unavailable
Jan 22 00:08:13 compute-0 systemd[1]: libpod-d7107feba5bf0878e9c65f8f8b3a3812da23274a7b4cb58fca2ab74ab87c401a.scope: Deactivated successfully.
Jan 22 00:08:13 compute-0 podman[274638]: 2026-01-22 00:08:13.759503974 +0000 UTC m=+1.003852325 container died d7107feba5bf0878e9c65f8f8b3a3812da23274a7b4cb58fca2ab74ab87c401a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:08:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-da2d07c4c17d0f482ae3b34958eae58f6cf70462b12a0e30ebd4b5fe015a60a3-merged.mount: Deactivated successfully.
Jan 22 00:08:13 compute-0 podman[274638]: 2026-01-22 00:08:13.816650724 +0000 UTC m=+1.060999105 container remove d7107feba5bf0878e9c65f8f8b3a3812da23274a7b4cb58fca2ab74ab87c401a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 00:08:13 compute-0 systemd[1]: libpod-conmon-d7107feba5bf0878e9c65f8f8b3a3812da23274a7b4cb58fca2ab74ab87c401a.scope: Deactivated successfully.
Jan 22 00:08:13 compute-0 sudo[274530]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:13 compute-0 sudo[274680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:08:13 compute-0 sudo[274680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:13 compute-0 sudo[274680]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:14 compute-0 sudo[274705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:08:14 compute-0 sudo[274705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:14 compute-0 sudo[274705]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:14 compute-0 sudo[274730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:08:14 compute-0 sudo[274730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:14 compute-0 sudo[274730]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:14 compute-0 sudo[274755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 22 00:08:14 compute-0 sudo[274755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:14.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:14 compute-0 podman[274822]: 2026-01-22 00:08:14.663820942 +0000 UTC m=+0.059511384 container create e559217bc79d5196d39782110f2f55dbe94748d955a3507eddcb7a121001f071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chandrasekhar, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:08:14 compute-0 systemd[1]: Started libpod-conmon-e559217bc79d5196d39782110f2f55dbe94748d955a3507eddcb7a121001f071.scope.
Jan 22 00:08:14 compute-0 podman[274822]: 2026-01-22 00:08:14.637768149 +0000 UTC m=+0.033458601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:08:14 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:08:14 compute-0 podman[274822]: 2026-01-22 00:08:14.765137712 +0000 UTC m=+0.160828144 container init e559217bc79d5196d39782110f2f55dbe94748d955a3507eddcb7a121001f071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:08:14 compute-0 podman[274822]: 2026-01-22 00:08:14.777515532 +0000 UTC m=+0.173205954 container start e559217bc79d5196d39782110f2f55dbe94748d955a3507eddcb7a121001f071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chandrasekhar, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 00:08:14 compute-0 nifty_chandrasekhar[274839]: 167 167
Jan 22 00:08:14 compute-0 systemd[1]: libpod-e559217bc79d5196d39782110f2f55dbe94748d955a3507eddcb7a121001f071.scope: Deactivated successfully.
Jan 22 00:08:14 compute-0 podman[274822]: 2026-01-22 00:08:14.787506471 +0000 UTC m=+0.183196903 container attach e559217bc79d5196d39782110f2f55dbe94748d955a3507eddcb7a121001f071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chandrasekhar, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:08:14 compute-0 podman[274822]: 2026-01-22 00:08:14.78912143 +0000 UTC m=+0.184811852 container died e559217bc79d5196d39782110f2f55dbe94748d955a3507eddcb7a121001f071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:08:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-552bb0753d09c93ac91e7e84f4a61da349c78d4d2ad394c7adbf21ed8b6677e0-merged.mount: Deactivated successfully.
Jan 22 00:08:14 compute-0 podman[274822]: 2026-01-22 00:08:14.827858903 +0000 UTC m=+0.223549305 container remove e559217bc79d5196d39782110f2f55dbe94748d955a3507eddcb7a121001f071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 00:08:14 compute-0 systemd[1]: libpod-conmon-e559217bc79d5196d39782110f2f55dbe94748d955a3507eddcb7a121001f071.scope: Deactivated successfully.
Jan 22 00:08:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 938 B/s wr, 13 op/s
Jan 22 00:08:15 compute-0 ceph-mon[74318]: pgmap v1518: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 938 B/s wr, 13 op/s
Jan 22 00:08:15 compute-0 podman[274863]: 2026-01-22 00:08:15.026435368 +0000 UTC m=+0.052975303 container create 8e2bf509053cb32e4d8adf1a4bcb6ef5f2f593028f982e93bdb096dca07a5434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_haslett, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 00:08:15 compute-0 systemd[1]: Started libpod-conmon-8e2bf509053cb32e4d8adf1a4bcb6ef5f2f593028f982e93bdb096dca07a5434.scope.
Jan 22 00:08:15 compute-0 podman[274863]: 2026-01-22 00:08:15.002083588 +0000 UTC m=+0.028623563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:08:15 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60137f017d3c491c83d1023d616c7f2095e04c7dbbff960119b42f33763ab629/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60137f017d3c491c83d1023d616c7f2095e04c7dbbff960119b42f33763ab629/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60137f017d3c491c83d1023d616c7f2095e04c7dbbff960119b42f33763ab629/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60137f017d3c491c83d1023d616c7f2095e04c7dbbff960119b42f33763ab629/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:08:15 compute-0 podman[274863]: 2026-01-22 00:08:15.117288256 +0000 UTC m=+0.143828231 container init 8e2bf509053cb32e4d8adf1a4bcb6ef5f2f593028f982e93bdb096dca07a5434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_haslett, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 00:08:15 compute-0 podman[274863]: 2026-01-22 00:08:15.123953431 +0000 UTC m=+0.150493376 container start 8e2bf509053cb32e4d8adf1a4bcb6ef5f2f593028f982e93bdb096dca07a5434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_haslett, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:08:15 compute-0 podman[274863]: 2026-01-22 00:08:15.127789499 +0000 UTC m=+0.154329444 container attach 8e2bf509053cb32e4d8adf1a4bcb6ef5f2f593028f982e93bdb096dca07a5434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 22 00:08:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:15.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:15 compute-0 goofy_haslett[274880]: {
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:     "1": [
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:         {
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:             "devices": [
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:                 "/dev/loop3"
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:             ],
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:             "lv_name": "ceph_lv0",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:             "lv_size": "7511998464",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:             "name": "ceph_lv0",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:             "tags": {
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:                 "ceph.cluster_name": "ceph",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:                 "ceph.crush_device_class": "",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:                 "ceph.encrypted": "0",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:                 "ceph.osd_id": "1",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:                 "ceph.type": "block",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:                 "ceph.vdo": "0"
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:             },
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:             "type": "block",
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:             "vg_name": "ceph_vg0"
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:         }
Jan 22 00:08:15 compute-0 goofy_haslett[274880]:     ]
Jan 22 00:08:15 compute-0 goofy_haslett[274880]: }
Jan 22 00:08:15 compute-0 systemd[1]: libpod-8e2bf509053cb32e4d8adf1a4bcb6ef5f2f593028f982e93bdb096dca07a5434.scope: Deactivated successfully.
Jan 22 00:08:15 compute-0 podman[274863]: 2026-01-22 00:08:15.930463386 +0000 UTC m=+0.957003351 container died 8e2bf509053cb32e4d8adf1a4bcb6ef5f2f593028f982e93bdb096dca07a5434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_haslett, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 00:08:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-60137f017d3c491c83d1023d616c7f2095e04c7dbbff960119b42f33763ab629-merged.mount: Deactivated successfully.
Jan 22 00:08:16 compute-0 podman[274863]: 2026-01-22 00:08:16.012998488 +0000 UTC m=+1.039538423 container remove 8e2bf509053cb32e4d8adf1a4bcb6ef5f2f593028f982e93bdb096dca07a5434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_haslett, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:08:16 compute-0 systemd[1]: libpod-conmon-8e2bf509053cb32e4d8adf1a4bcb6ef5f2f593028f982e93bdb096dca07a5434.scope: Deactivated successfully.
Jan 22 00:08:16 compute-0 sudo[274755]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:16 compute-0 sudo[274905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:08:16 compute-0 sudo[274905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:16 compute-0 sudo[274905]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:16 compute-0 sudo[274930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:08:16 compute-0 sudo[274930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:16 compute-0 sudo[274930]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:16 compute-0 sudo[274955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:08:16 compute-0 sudo[274955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:16 compute-0 sudo[274955]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:16 compute-0 sudo[274980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 22 00:08:16 compute-0 sudo[274980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:16 compute-0 sudo[275005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:08:16 compute-0 sudo[275005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:16 compute-0 sudo[275005]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:16 compute-0 sudo[275032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:08:16 compute-0 sudo[275032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:16 compute-0 sudo[275032]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:16.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:16 compute-0 podman[275096]: 2026-01-22 00:08:16.742020459 +0000 UTC m=+0.047142124 container create bfbf852cc30be78b14b29120df1d146d7a884485cad4dbe69dbce3f4dd3d533a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:08:16 compute-0 systemd[1]: Started libpod-conmon-bfbf852cc30be78b14b29120df1d146d7a884485cad4dbe69dbce3f4dd3d533a.scope.
Jan 22 00:08:16 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:08:16 compute-0 podman[275096]: 2026-01-22 00:08:16.720145294 +0000 UTC m=+0.025266989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:08:16 compute-0 podman[275096]: 2026-01-22 00:08:16.821380182 +0000 UTC m=+0.126501887 container init bfbf852cc30be78b14b29120df1d146d7a884485cad4dbe69dbce3f4dd3d533a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jackson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:08:16 compute-0 podman[275096]: 2026-01-22 00:08:16.82975991 +0000 UTC m=+0.134881615 container start bfbf852cc30be78b14b29120df1d146d7a884485cad4dbe69dbce3f4dd3d533a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jackson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 00:08:16 compute-0 podman[275096]: 2026-01-22 00:08:16.834116424 +0000 UTC m=+0.139238239 container attach bfbf852cc30be78b14b29120df1d146d7a884485cad4dbe69dbce3f4dd3d533a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jackson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 00:08:16 compute-0 pedantic_jackson[275112]: 167 167
Jan 22 00:08:16 compute-0 systemd[1]: libpod-bfbf852cc30be78b14b29120df1d146d7a884485cad4dbe69dbce3f4dd3d533a.scope: Deactivated successfully.
Jan 22 00:08:16 compute-0 podman[275096]: 2026-01-22 00:08:16.83625995 +0000 UTC m=+0.141381655 container died bfbf852cc30be78b14b29120df1d146d7a884485cad4dbe69dbce3f4dd3d533a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jackson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 00:08:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-46b90a3e0be689e3936ae6031bac9ef0d2d26b630796ff2e941e1f316c4bddf0-merged.mount: Deactivated successfully.
Jan 22 00:08:16 compute-0 podman[275096]: 2026-01-22 00:08:16.881618237 +0000 UTC m=+0.186739912 container remove bfbf852cc30be78b14b29120df1d146d7a884485cad4dbe69dbce3f4dd3d533a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:08:16 compute-0 systemd[1]: libpod-conmon-bfbf852cc30be78b14b29120df1d146d7a884485cad4dbe69dbce3f4dd3d533a.scope: Deactivated successfully.
Jan 22 00:08:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 57 MiB data, 253 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 353 KiB/s wr, 25 op/s
Jan 22 00:08:17 compute-0 ceph-mon[74318]: pgmap v1519: 305 pgs: 305 active+clean; 57 MiB data, 253 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 353 KiB/s wr, 25 op/s
Jan 22 00:08:17 compute-0 podman[275136]: 2026-01-22 00:08:17.064756326 +0000 UTC m=+0.054571081 container create ecef37ba03827fd75436e4151340a1b4fdfb842c21ca4b067265a6392ea05a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 00:08:17 compute-0 systemd[1]: Started libpod-conmon-ecef37ba03827fd75436e4151340a1b4fdfb842c21ca4b067265a6392ea05a60.scope.
Jan 22 00:08:17 compute-0 podman[275136]: 2026-01-22 00:08:17.046173494 +0000 UTC m=+0.035988239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:08:17 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0570f97d109b0fee7bfebccc4a07beb8e48ede660093ba2d371f43d99b405be5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0570f97d109b0fee7bfebccc4a07beb8e48ede660093ba2d371f43d99b405be5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0570f97d109b0fee7bfebccc4a07beb8e48ede660093ba2d371f43d99b405be5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0570f97d109b0fee7bfebccc4a07beb8e48ede660093ba2d371f43d99b405be5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:08:17 compute-0 podman[275136]: 2026-01-22 00:08:17.179148759 +0000 UTC m=+0.168963524 container init ecef37ba03827fd75436e4151340a1b4fdfb842c21ca4b067265a6392ea05a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_raman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:08:17 compute-0 podman[275136]: 2026-01-22 00:08:17.194168492 +0000 UTC m=+0.183983247 container start ecef37ba03827fd75436e4151340a1b4fdfb842c21ca4b067265a6392ea05a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_raman, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 00:08:17 compute-0 podman[275136]: 2026-01-22 00:08:17.197933128 +0000 UTC m=+0.187747883 container attach ecef37ba03827fd75436e4151340a1b4fdfb842c21ca4b067265a6392ea05a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_raman, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 00:08:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:17.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:18 compute-0 priceless_raman[275152]: {
Jan 22 00:08:18 compute-0 priceless_raman[275152]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 22 00:08:18 compute-0 priceless_raman[275152]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:08:18 compute-0 priceless_raman[275152]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 00:08:18 compute-0 priceless_raman[275152]:         "osd_id": 1,
Jan 22 00:08:18 compute-0 priceless_raman[275152]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:08:18 compute-0 priceless_raman[275152]:         "type": "bluestore"
Jan 22 00:08:18 compute-0 priceless_raman[275152]:     }
Jan 22 00:08:18 compute-0 priceless_raman[275152]: }
Jan 22 00:08:18 compute-0 systemd[1]: libpod-ecef37ba03827fd75436e4151340a1b4fdfb842c21ca4b067265a6392ea05a60.scope: Deactivated successfully.
Jan 22 00:08:18 compute-0 podman[275136]: 2026-01-22 00:08:18.101985787 +0000 UTC m=+1.091800512 container died ecef37ba03827fd75436e4151340a1b4fdfb842c21ca4b067265a6392ea05a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 00:08:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-0570f97d109b0fee7bfebccc4a07beb8e48ede660093ba2d371f43d99b405be5-merged.mount: Deactivated successfully.
Jan 22 00:08:18 compute-0 podman[275136]: 2026-01-22 00:08:18.180309729 +0000 UTC m=+1.170124444 container remove ecef37ba03827fd75436e4151340a1b4fdfb842c21ca4b067265a6392ea05a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_raman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 00:08:18 compute-0 systemd[1]: libpod-conmon-ecef37ba03827fd75436e4151340a1b4fdfb842c21ca4b067265a6392ea05a60.scope: Deactivated successfully.
Jan 22 00:08:18 compute-0 sudo[274980]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:08:18 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:08:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:08:18 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:08:18 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev e6131491-cc65-4011-a7ef-d65b71410584 does not exist
Jan 22 00:08:18 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev cd9b7137-73ee-4fb8-976c-a9a000b29b6f does not exist
Jan 22 00:08:18 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 1d268385-5957-4fdd-944f-4d0ff515c913 does not exist
Jan 22 00:08:18 compute-0 sudo[275189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:08:18 compute-0 sudo[275189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:18 compute-0 sudo[275189]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:18 compute-0 sudo[275214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 00:08:18 compute-0 sudo[275214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:18 compute-0 sudo[275214]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:08:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:18.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:08:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:08:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 57 MiB data, 253 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 352 KiB/s wr, 21 op/s
Jan 22 00:08:19 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:08:19 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:08:19 compute-0 ceph-mon[74318]: pgmap v1520: 305 pgs: 305 active+clean; 57 MiB data, 253 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 352 KiB/s wr, 21 op/s
Jan 22 00:08:19 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1654514682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:08:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:19.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:20 compute-0 nova_compute[247516]: 2026-01-22 00:08:19.996 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:08:20 compute-0 nova_compute[247516]: 2026-01-22 00:08:20.000 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:08:20 compute-0 nova_compute[247516]: 2026-01-22 00:08:20.000 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:08:20 compute-0 nova_compute[247516]: 2026-01-22 00:08:20.029 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 00:08:20 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2374862042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:08:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:20.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 88 MiB data, 270 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 22 00:08:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:21.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:21 compute-0 ceph-mon[74318]: pgmap v1521: 305 pgs: 305 active+clean; 88 MiB data, 270 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 22 00:08:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 00:08:22 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2717793404' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:08:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 00:08:22 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2717793404' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:08:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:22.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2717793404' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:08:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2717793404' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:08:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 88 MiB data, 270 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 22 00:08:22 compute-0 nova_compute[247516]: 2026-01-22 00:08:22.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:08:22 compute-0 nova_compute[247516]: 2026-01-22 00:08:22.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:08:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:23.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:08:23 compute-0 ceph-mon[74318]: pgmap v1522: 305 pgs: 305 active+clean; 88 MiB data, 270 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 22 00:08:24 compute-0 podman[275242]: 2026-01-22 00:08:24.033771861 +0000 UTC m=+0.129601732 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 00:08:24 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:08:24.091 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 00:08:24 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:08:24.095 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 00:08:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:24.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:24 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3471491564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:08:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 72 MiB data, 262 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 22 00:08:24 compute-0 nova_compute[247516]: 2026-01-22 00:08:24.988 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:08:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:25.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:25 compute-0 ceph-mon[74318]: pgmap v1523: 305 pgs: 305 active+clean; 72 MiB data, 262 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 22 00:08:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1015820932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:08:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3082309102' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:08:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3082309102' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:08:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:26.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Jan 22 00:08:27 compute-0 ceph-mon[74318]: pgmap v1524: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Jan 22 00:08:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:27.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:27 compute-0 nova_compute[247516]: 2026-01-22 00:08:27.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:08:28 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:08:28.098 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 00:08:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:28.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:08:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 MiB/s wr, 35 op/s
Jan 22 00:08:28 compute-0 nova_compute[247516]: 2026-01-22 00:08:28.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:08:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:29.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:29 compute-0 ceph-mon[74318]: pgmap v1525: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 MiB/s wr, 35 op/s
Jan 22 00:08:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:30.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 MiB/s wr, 35 op/s
Jan 22 00:08:30 compute-0 nova_compute[247516]: 2026-01-22 00:08:30.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:08:31 compute-0 ceph-mon[74318]: pgmap v1526: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 MiB/s wr, 35 op/s
Jan 22 00:08:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:31.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:31 compute-0 podman[275272]: 2026-01-22 00:08:31.974150577 +0000 UTC m=+0.078511655 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 00:08:31 compute-0 nova_compute[247516]: 2026-01-22 00:08:31.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:08:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:08:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:32.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:08:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 00:08:33 compute-0 ceph-mon[74318]: pgmap v1527: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 00:08:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:33.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:08:33 compute-0 nova_compute[247516]: 2026-01-22 00:08:33.994 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:08:34 compute-0 nova_compute[247516]: 2026-01-22 00:08:34.019 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:08:34 compute-0 nova_compute[247516]: 2026-01-22 00:08:34.019 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:08:34 compute-0 nova_compute[247516]: 2026-01-22 00:08:34.020 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:08:34 compute-0 nova_compute[247516]: 2026-01-22 00:08:34.020 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:08:34 compute-0 nova_compute[247516]: 2026-01-22 00:08:34.021 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:08:34 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:08:34 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3838614738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:08:34 compute-0 nova_compute[247516]: 2026-01-22 00:08:34.489 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:08:34 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3838614738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:08:34 compute-0 nova_compute[247516]: 2026-01-22 00:08:34.669 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:08:34 compute-0 nova_compute[247516]: 2026-01-22 00:08:34.670 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5118MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:08:34 compute-0 nova_compute[247516]: 2026-01-22 00:08:34.670 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:08:34 compute-0 nova_compute[247516]: 2026-01-22 00:08:34.670 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:08:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:34.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:34 compute-0 nova_compute[247516]: 2026-01-22 00:08:34.745 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 22 00:08:34 compute-0 nova_compute[247516]: 2026-01-22 00:08:34.746 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:08:34 compute-0 nova_compute[247516]: 2026-01-22 00:08:34.746 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:08:34 compute-0 nova_compute[247516]: 2026-01-22 00:08:34.789 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:08:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 00:08:35 compute-0 nova_compute[247516]: 2026-01-22 00:08:35.211 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:08:35 compute-0 nova_compute[247516]: 2026-01-22 00:08:35.217 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:08:35 compute-0 nova_compute[247516]: 2026-01-22 00:08:35.234 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 00:08:35 compute-0 nova_compute[247516]: 2026-01-22 00:08:35.235 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:08:35 compute-0 nova_compute[247516]: 2026-01-22 00:08:35.236 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:08:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:35.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:35 compute-0 ceph-mon[74318]: pgmap v1528: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 00:08:35 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/119771493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:08:36 compute-0 sudo[275338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:08:36 compute-0 sudo[275338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:36 compute-0 sudo[275338]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:36.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:36 compute-0 sudo[275363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:08:36 compute-0 sudo[275363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:36 compute-0 sudo[275363]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 341 B/s wr, 4 op/s
Jan 22 00:08:37 compute-0 ceph-mon[74318]: pgmap v1529: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 341 B/s wr, 4 op/s
Jan 22 00:08:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:37.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:38 compute-0 nova_compute[247516]: 2026-01-22 00:08:38.234 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:08:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:08:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:08:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:38.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:08:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:39 compute-0 ceph-mon[74318]: pgmap v1530: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:08:39
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['.mgr', 'volumes', 'images', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control']
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 22 00:08:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:39.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:08:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:08:40 compute-0 ceph-mgr[74614]: client.0 ms_handle_reset on v2:192.168.122.100:6800/934453051
Jan 22 00:08:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:40.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:41 compute-0 ceph-mon[74318]: pgmap v1531: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:41.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:42.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:43 compute-0 ceph-mon[74318]: pgmap v1532: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:43.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:08:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:44.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:45 compute-0 ceph-mon[74318]: pgmap v1533: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:45.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:46.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:47 compute-0 ceph-mon[74318]: pgmap v1534: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:08:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:47.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:08:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:08:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:48.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:08:48.769 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:08:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:08:48.770 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:08:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:08:48.770 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:08:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:49 compute-0 ceph-mon[74318]: pgmap v1535: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:49.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:50.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:51 compute-0 ceph-mon[74318]: pgmap v1536: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:51.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:52.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:53 compute-0 ceph-mon[74318]: pgmap v1537: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:53.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
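[annotation] Every pg_autoscaler row above is reproducible from its own logged fields: pg target = used_ratio x bias x overall PG budget. Dividing any row's target by (used_ratio x bias) gives 300, consistent with 3 OSDs at the default mon_target_pg_per_osd = 100 (inferred from these rows, not read from this cluster's config). A worked sketch; the quantize step is a simplification, since the real module keeps the pool's current pg_num unless the target differs from it by roughly a factor of three:

    # Reproduce the pg_autoscaler rows above.
    BUDGET = 300  # inferred: n_osds * mon_target_pg_per_osd = 3 * 100

    def pg_target(used_ratio, bias):
        return used_ratio * bias * BUDGET

    def quantize(target, current):
        # simplified: round up to a power of two, but never below the
        # pool's current pg_num (the module only moves on a ~3x mismatch)
        p = 1
        while p < target:
            p *= 2
        return max(p, current)

    # Pool 'images': "pg target 0.570942..." quantized to 32 (current 32)
    t = pg_target(0.0019031427391587568, 1.0)
    print(t, quantize(t, current=32))   # 0.5709428217476270  32
    # Pool 'cephfs.cephfs.meta': bias 4.0, quantized to 16 (current 16)
    t = pg_target(1.4540294062907128e-06, 4.0)
    print(t, quantize(t, current=16))   # 0.0017448352875488...  16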
Jan 22 00:08:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:54.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:55 compute-0 podman[275397]: 2026-01-22 00:08:55.003733906 +0000 UTC m=+0.116197234 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 00:08:55 compute-0 ceph-mon[74318]: pgmap v1538: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:55.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:56.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:08:56 compute-0 sudo[275424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:08:56 compute-0 sudo[275424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:56 compute-0 sudo[275424]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:56 compute-0 sudo[275449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:08:56 compute-0 sudo[275449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:08:56 compute-0 sudo[275449]: pam_unix(sudo:session): session closed for user root
Jan 22 00:08:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:57 compute-0 ceph-mon[74318]: pgmap v1539: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:57.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:08:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:08:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:08:58.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:08:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:59 compute-0 ceph-mon[74318]: pgmap v1540: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:08:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:08:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:08:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:08:59.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:00.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:01 compute-0 ceph-mon[74318]: pgmap v1541: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:01.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:02.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:02 compute-0 podman[275477]: 2026-01-22 00:09:02.96235095 +0000 UTC m=+0.070741424 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 00:09:03 compute-0 ceph-mon[74318]: pgmap v1542: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:09:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:03.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:09:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:09:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:04.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:05 compute-0 ceph-mon[74318]: pgmap v1543: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:05.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:06.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:07 compute-0 ceph-mon[74318]: pgmap v1544: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:07.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:09:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:09:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:08.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:09:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:09:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:09:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:09:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:09:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:09:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:09:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:09.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:10 compute-0 ceph-mon[74318]: pgmap v1545: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:10.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:11 compute-0 ceph-mon[74318]: pgmap v1546: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:11.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:09:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:12.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:09:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:13.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:09:13 compute-0 ceph-mon[74318]: pgmap v1547: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:14.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:09:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:15.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:09:15 compute-0 ceph-mon[74318]: pgmap v1548: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:09:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:16.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:09:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:17 compute-0 sudo[275505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:09:17 compute-0 sudo[275505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:17 compute-0 sudo[275505]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:17 compute-0 sudo[275530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:09:17 compute-0 sudo[275530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:17 compute-0 sudo[275530]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:17 compute-0 ceph-mon[74318]: pgmap v1549: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:17.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:09:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:18.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:18 compute-0 sudo[275556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:09:18 compute-0 sudo[275556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:18 compute-0 sudo[275556]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:18 compute-0 sudo[275581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:09:18 compute-0 sudo[275581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:18 compute-0 sudo[275581]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:19 compute-0 sudo[275606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:09:19 compute-0 sudo[275606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:19 compute-0 sudo[275606]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:19 compute-0 sudo[275631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 00:09:19 compute-0 sudo[275631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:09:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:19.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:09:19 compute-0 sudo[275631]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:09:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:09:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 00:09:19 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:09:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
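[annotation] The audit entries above show cephadm's mgr asking the mon for a minimal ceph.conf and the client.admin keyring, the two artifacts it distributes to managed hosts. Both are ordinary mon commands and can be replayed from any admin node; a sketch using the same two commands the log records:

    import subprocess

    minimal_conf = subprocess.run(
        ['ceph', 'config', 'generate-minimal-conf'],
        check=True, capture_output=True, text=True).stdout
    admin_keyring = subprocess.run(
        ['ceph', 'auth', 'get', 'client.admin'],
        check=True, capture_output=True, text=True).stdout
    print(minimal_conf)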
Jan 22 00:09:19 compute-0 nova_compute[247516]: 2026-01-22 00:09:19.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:09:19 compute-0 nova_compute[247516]: 2026-01-22 00:09:19.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:09:19 compute-0 nova_compute[247516]: 2026-01-22 00:09:19.994 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:09:20 compute-0 nova_compute[247516]: 2026-01-22 00:09:20.020 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
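[annotation] The four nova_compute lines above are one pass of the _heal_instance_info_cache periodic task: oslo.service fires it on its schedule, the manager rebuilds its instance list, finds nothing on this host, and exits. The underlying pattern is oslo.service's periodic_task decorator on a PeriodicTasks subclass (real API; the spacing value and body here are illustrative, nova's actual interval is config-driven):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # placeholder: refresh one instance's network info cache per pass
            pass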
Jan 22 00:09:20 compute-0 ceph-mon[74318]: pgmap v1550: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:09:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:20.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:09:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:21 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:09:21 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 0517c73b-6d67-4873-a0d9-5cf8116aeba3 does not exist
Jan 22 00:09:21 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 1176acb6-da5f-4d44-a296-ebcbc42d4bf7 does not exist
Jan 22 00:09:21 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 787b2a26-8d4d-4805-af4c-99951b5d40cd does not exist
Jan 22 00:09:21 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 00:09:21 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:09:21 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 00:09:21 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:09:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:09:21 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:09:21 compute-0 ceph-mon[74318]: pgmap v1551: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:21 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:09:21 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:09:21 compute-0 sudo[275688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:09:21 compute-0 sudo[275688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:21 compute-0 sudo[275688]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:21 compute-0 sudo[275713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:09:21 compute-0 sudo[275713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:21 compute-0 sudo[275713]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:21 compute-0 sudo[275738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:09:21 compute-0 sudo[275738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:21.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:21 compute-0 sudo[275738]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:21 compute-0 sudo[275764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 00:09:21 compute-0 sudo[275764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:21 compute-0 podman[275828]: 2026-01-22 00:09:21.945535376 +0000 UTC m=+0.064157630 container create 9781b752ca1cda646ef734e6990bcc0ff34b6c78c68c1c23d550162b1b103913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 00:09:21 compute-0 systemd[1]: Started libpod-conmon-9781b752ca1cda646ef734e6990bcc0ff34b6c78c68c1c23d550162b1b103913.scope.
Jan 22 00:09:22 compute-0 podman[275828]: 2026-01-22 00:09:21.92051197 +0000 UTC m=+0.039134274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:09:22 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:09:22 compute-0 podman[275828]: 2026-01-22 00:09:22.051865982 +0000 UTC m=+0.170488256 container init 9781b752ca1cda646ef734e6990bcc0ff34b6c78c68c1c23d550162b1b103913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:09:22 compute-0 podman[275828]: 2026-01-22 00:09:22.067314942 +0000 UTC m=+0.185937216 container start 9781b752ca1cda646ef734e6990bcc0ff34b6c78c68c1c23d550162b1b103913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:09:22 compute-0 podman[275828]: 2026-01-22 00:09:22.071739528 +0000 UTC m=+0.190361802 container attach 9781b752ca1cda646ef734e6990bcc0ff34b6c78c68c1c23d550162b1b103913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Jan 22 00:09:22 compute-0 magical_chaplygin[275844]: 167 167
Jan 22 00:09:22 compute-0 systemd[1]: libpod-9781b752ca1cda646ef734e6990bcc0ff34b6c78c68c1c23d550162b1b103913.scope: Deactivated successfully.
Jan 22 00:09:22 compute-0 podman[275828]: 2026-01-22 00:09:22.075892207 +0000 UTC m=+0.194514471 container died 9781b752ca1cda646ef734e6990bcc0ff34b6c78c68c1c23d550162b1b103913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:09:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-800cbfcb8f8982ac7ef41d0cc84f7a7fcdcf7d28e20ef61566e7cc575f46a213-merged.mount: Deactivated successfully.
Jan 22 00:09:22 compute-0 podman[275828]: 2026-01-22 00:09:22.129235521 +0000 UTC m=+0.247857785 container remove 9781b752ca1cda646ef734e6990bcc0ff34b6c78c68c1c23d550162b1b103913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chaplygin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 00:09:22 compute-0 systemd[1]: libpod-conmon-9781b752ca1cda646ef734e6990bcc0ff34b6c78c68c1c23d550162b1b103913.scope: Deactivated successfully.
Jan 22 00:09:22 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:09:22 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:09:22 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:09:22 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:09:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/785732584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:09:22 compute-0 podman[275867]: 2026-01-22 00:09:22.29691118 +0000 UTC m=+0.048043601 container create 1143e3c8b425ec2c5e236be5fdc39d2f6cb70515e0f897f13b90632c9fc458b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:09:22 compute-0 systemd[1]: Started libpod-conmon-1143e3c8b425ec2c5e236be5fdc39d2f6cb70515e0f897f13b90632c9fc458b5.scope.
Jan 22 00:09:22 compute-0 podman[275867]: 2026-01-22 00:09:22.277784586 +0000 UTC m=+0.028917027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:09:22 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c851bff424ab32fc57b957cfb2a3108bd2ea2931ea9a5c64bb0e2f4bab3b2d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c851bff424ab32fc57b957cfb2a3108bd2ea2931ea9a5c64bb0e2f4bab3b2d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c851bff424ab32fc57b957cfb2a3108bd2ea2931ea9a5c64bb0e2f4bab3b2d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c851bff424ab32fc57b957cfb2a3108bd2ea2931ea9a5c64bb0e2f4bab3b2d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c851bff424ab32fc57b957cfb2a3108bd2ea2931ea9a5c64bb0e2f4bab3b2d6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
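[annotation] The kernel's "supports timestamps until 2038 (0x7fffffff)" warnings flag XFS filesystems whose inodes still use 32-bit timestamps (the bigtime feature is off), so they hit the classic Y2038 limit. The hex value checks out directly:

    import datetime
    # 0x7fffffff = 2147483647 seconds after the Unix epoch
    print(datetime.datetime.fromtimestamp(0x7fffffff, tz=datetime.timezone.utc))
    # -> 2038-01-19 03:14:07+00:00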
Jan 22 00:09:22 compute-0 podman[275867]: 2026-01-22 00:09:22.404356861 +0000 UTC m=+0.155489292 container init 1143e3c8b425ec2c5e236be5fdc39d2f6cb70515e0f897f13b90632c9fc458b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Jan 22 00:09:22 compute-0 podman[275867]: 2026-01-22 00:09:22.412284536 +0000 UTC m=+0.163416957 container start 1143e3c8b425ec2c5e236be5fdc39d2f6cb70515e0f897f13b90632c9fc458b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 00:09:22 compute-0 podman[275867]: 2026-01-22 00:09:22.415521447 +0000 UTC m=+0.166653878 container attach 1143e3c8b425ec2c5e236be5fdc39d2f6cb70515e0f897f13b90632c9fc458b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_raman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:09:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:09:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:22.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:09:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1839492206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:09:23 compute-0 ceph-mon[74318]: pgmap v1552: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:23 compute-0 musing_raman[275883]: --> passed data devices: 0 physical, 1 LVM
Jan 22 00:09:23 compute-0 musing_raman[275883]: --> relative data size: 1.0
Jan 22 00:09:23 compute-0 musing_raman[275883]: --> All data devices are unavailable
Jan 22 00:09:23 compute-0 systemd[1]: libpod-1143e3c8b425ec2c5e236be5fdc39d2f6cb70515e0f897f13b90632c9fc458b5.scope: Deactivated successfully.
Jan 22 00:09:23 compute-0 conmon[275883]: conmon 1143e3c8b425ec2c5e23 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1143e3c8b425ec2c5e236be5fdc39d2f6cb70515e0f897f13b90632c9fc458b5.scope/container/memory.events
Jan 22 00:09:23 compute-0 podman[275867]: 2026-01-22 00:09:23.273212589 +0000 UTC m=+1.024345000 container died 1143e3c8b425ec2c5e236be5fdc39d2f6cb70515e0f897f13b90632c9fc458b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_raman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:09:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c851bff424ab32fc57b957cfb2a3108bd2ea2931ea9a5c64bb0e2f4bab3b2d6-merged.mount: Deactivated successfully.
Jan 22 00:09:23 compute-0 podman[275867]: 2026-01-22 00:09:23.333469066 +0000 UTC m=+1.084601477 container remove 1143e3c8b425ec2c5e236be5fdc39d2f6cb70515e0f897f13b90632c9fc458b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 00:09:23 compute-0 systemd[1]: libpod-conmon-1143e3c8b425ec2c5e236be5fdc39d2f6cb70515e0f897f13b90632c9fc458b5.scope: Deactivated successfully.
Jan 22 00:09:23 compute-0 sudo[275764]: pam_unix(sudo:session): session closed for user root
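[annotation] The `lvm batch` run above ended with ceph-volume declaring "All data devices are unavailable" for /dev/ceph_vg0/ceph_lv0, and cephadm immediately follows up (a few lines below) with `ceph-volume lvm list --format json` to reconcile what already exists; a plausible reading is that the LV is "unavailable" because it is already consumed by an existing OSD. A hedged sketch of that reconciliation check; the JSON shape matches ceph-volume's `lvm list` output, but the availability interpretation is my own:

    import json
    import subprocess

    def osds_on_lv(lv_path):
        """Return OSD ids whose backing LV matches lv_path, per `ceph-volume lvm list`."""
        out = subprocess.run(
            ['ceph-volume', 'lvm', 'list', '--format', 'json'],
            check=True, capture_output=True, text=True).stdout
        inventory = json.loads(out)  # {osd_id: [{'lv_path': ..., 'vg_name': ..., 'tags': ...}, ...]}
        return [osd_id for osd_id, devs in inventory.items()
                if any(d.get('lv_path') == lv_path for d in devs)]

    # A non-empty result would mean the "unavailable" verdict above is just
    # ceph-volume refusing to re-consume an LV that already backs an OSD.
    print(osds_on_lv('/dev/ceph_vg0/ceph_lv0'))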
Jan 22 00:09:23 compute-0 sudo[275913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:09:23 compute-0 sudo[275913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:23 compute-0 sudo[275913]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:09:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:23.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:09:23 compute-0 sudo[275939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:09:23 compute-0 sudo[275939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:23 compute-0 sudo[275939]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:23 compute-0 sudo[275964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:09:23 compute-0 sudo[275964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:23 compute-0 sudo[275964]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:23 compute-0 sudo[275989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 22 00:09:23 compute-0 sudo[275989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:09:23 compute-0 nova_compute[247516]: 2026-01-22 00:09:23.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:09:23 compute-0 nova_compute[247516]: 2026-01-22 00:09:23.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:09:24 compute-0 podman[276056]: 2026-01-22 00:09:23.983785109 +0000 UTC m=+0.023250603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:09:24 compute-0 podman[276056]: 2026-01-22 00:09:24.553215753 +0000 UTC m=+0.592681237 container create 94b7199bc6e552ac29d1000536a09814ee10f13652bc07993be8d36b36d2823c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lederberg, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 22 00:09:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:24.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:24 compute-0 systemd[1]: Started libpod-conmon-94b7199bc6e552ac29d1000536a09814ee10f13652bc07993be8d36b36d2823c.scope.
Jan 22 00:09:24 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:09:24 compute-0 podman[276056]: 2026-01-22 00:09:24.829038114 +0000 UTC m=+0.868503588 container init 94b7199bc6e552ac29d1000536a09814ee10f13652bc07993be8d36b36d2823c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:09:24 compute-0 podman[276056]: 2026-01-22 00:09:24.838624872 +0000 UTC m=+0.878090336 container start 94b7199bc6e552ac29d1000536a09814ee10f13652bc07993be8d36b36d2823c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 00:09:24 compute-0 podman[276056]: 2026-01-22 00:09:24.842295965 +0000 UTC m=+0.881761459 container attach 94b7199bc6e552ac29d1000536a09814ee10f13652bc07993be8d36b36d2823c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lederberg, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 00:09:24 compute-0 modest_lederberg[276073]: 167 167
Jan 22 00:09:24 compute-0 systemd[1]: libpod-94b7199bc6e552ac29d1000536a09814ee10f13652bc07993be8d36b36d2823c.scope: Deactivated successfully.
Jan 22 00:09:24 compute-0 podman[276056]: 2026-01-22 00:09:24.845093761 +0000 UTC m=+0.884559235 container died 94b7199bc6e552ac29d1000536a09814ee10f13652bc07993be8d36b36d2823c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lederberg, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:09:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-d265e3b36d3fc33b08fdc84d1b0204ac8f7774dd234a066584b46fc0392736f5-merged.mount: Deactivated successfully.
Jan 22 00:09:24 compute-0 podman[276056]: 2026-01-22 00:09:24.878206268 +0000 UTC m=+0.917671742 container remove 94b7199bc6e552ac29d1000536a09814ee10f13652bc07993be8d36b36d2823c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lederberg, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 00:09:24 compute-0 systemd[1]: libpod-conmon-94b7199bc6e552ac29d1000536a09814ee10f13652bc07993be8d36b36d2823c.scope: Deactivated successfully.
Jan 22 00:09:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:25 compute-0 podman[276097]: 2026-01-22 00:09:25.063332118 +0000 UTC m=+0.060264379 container create f5bee996743687a1dd6d4788ac14f0531cab792f6bebff8a60f467e13b82a02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_napier, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Jan 22 00:09:25 compute-0 systemd[1]: Started libpod-conmon-f5bee996743687a1dd6d4788ac14f0531cab792f6bebff8a60f467e13b82a02c.scope.
Jan 22 00:09:25 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/018ad9bfaaec48d1d451463fc43cb647bfd0341e30bf38236973b787a32172f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/018ad9bfaaec48d1d451463fc43cb647bfd0341e30bf38236973b787a32172f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/018ad9bfaaec48d1d451463fc43cb647bfd0341e30bf38236973b787a32172f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/018ad9bfaaec48d1d451463fc43cb647bfd0341e30bf38236973b787a32172f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:09:25 compute-0 podman[276097]: 2026-01-22 00:09:25.041086898 +0000 UTC m=+0.038019189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:09:25 compute-0 podman[276097]: 2026-01-22 00:09:25.143795183 +0000 UTC m=+0.140727454 container init f5bee996743687a1dd6d4788ac14f0531cab792f6bebff8a60f467e13b82a02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_napier, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:09:25 compute-0 podman[276097]: 2026-01-22 00:09:25.159362075 +0000 UTC m=+0.156294326 container start f5bee996743687a1dd6d4788ac14f0531cab792f6bebff8a60f467e13b82a02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_napier, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:09:25 compute-0 podman[276097]: 2026-01-22 00:09:25.164209075 +0000 UTC m=+0.161141326 container attach f5bee996743687a1dd6d4788ac14f0531cab792f6bebff8a60f467e13b82a02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 22 00:09:25 compute-0 podman[276111]: 2026-01-22 00:09:25.255892638 +0000 UTC m=+0.147183154 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 22 00:09:25 compute-0 ceph-mon[74318]: pgmap v1553: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:25.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:25 compute-0 nova_compute[247516]: 2026-01-22 00:09:25.988 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:09:25 compute-0 priceless_napier[276114]: {
Jan 22 00:09:25 compute-0 priceless_napier[276114]:     "1": [
Jan 22 00:09:25 compute-0 priceless_napier[276114]:         {
Jan 22 00:09:25 compute-0 priceless_napier[276114]:             "devices": [
Jan 22 00:09:25 compute-0 priceless_napier[276114]:                 "/dev/loop3"
Jan 22 00:09:25 compute-0 priceless_napier[276114]:             ],
Jan 22 00:09:25 compute-0 priceless_napier[276114]:             "lv_name": "ceph_lv0",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:             "lv_size": "7511998464",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:             "name": "ceph_lv0",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:             "tags": {
Jan 22 00:09:25 compute-0 priceless_napier[276114]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:                 "ceph.cluster_name": "ceph",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:                 "ceph.crush_device_class": "",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:                 "ceph.encrypted": "0",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:                 "ceph.osd_id": "1",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:                 "ceph.type": "block",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:                 "ceph.vdo": "0"
Jan 22 00:09:25 compute-0 priceless_napier[276114]:             },
Jan 22 00:09:25 compute-0 priceless_napier[276114]:             "type": "block",
Jan 22 00:09:25 compute-0 priceless_napier[276114]:             "vg_name": "ceph_vg0"
Jan 22 00:09:25 compute-0 priceless_napier[276114]:         }
Jan 22 00:09:25 compute-0 priceless_napier[276114]:     ]
Jan 22 00:09:25 compute-0 priceless_napier[276114]: }
Jan 22 00:09:26 compute-0 systemd[1]: libpod-f5bee996743687a1dd6d4788ac14f0531cab792f6bebff8a60f467e13b82a02c.scope: Deactivated successfully.
Jan 22 00:09:26 compute-0 podman[276097]: 2026-01-22 00:09:26.038197812 +0000 UTC m=+1.035130073 container died f5bee996743687a1dd6d4788ac14f0531cab792f6bebff8a60f467e13b82a02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_napier, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:09:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-018ad9bfaaec48d1d451463fc43cb647bfd0341e30bf38236973b787a32172f4-merged.mount: Deactivated successfully.
Jan 22 00:09:26 compute-0 podman[276097]: 2026-01-22 00:09:26.098661447 +0000 UTC m=+1.095593698 container remove f5bee996743687a1dd6d4788ac14f0531cab792f6bebff8a60f467e13b82a02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 00:09:26 compute-0 systemd[1]: libpod-conmon-f5bee996743687a1dd6d4788ac14f0531cab792f6bebff8a60f467e13b82a02c.scope: Deactivated successfully.
Jan 22 00:09:26 compute-0 sudo[275989]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:26 compute-0 sudo[276163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:09:26 compute-0 sudo[276163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:26 compute-0 sudo[276163]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:26 compute-0 sudo[276188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:09:26 compute-0 sudo[276188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:26 compute-0 sudo[276188]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:26 compute-0 sudo[276213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:09:26 compute-0 sudo[276213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:26 compute-0 sudo[276213]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:26 compute-0 sudo[276238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 22 00:09:26 compute-0 sudo[276238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:26.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:26 compute-0 podman[276304]: 2026-01-22 00:09:26.821783596 +0000 UTC m=+0.054590693 container create 3f8d88febe072f4eef8b913ca6220ff428081c8cf36f8dae9f8ff00735d0e275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:09:26 compute-0 systemd[1]: Started libpod-conmon-3f8d88febe072f4eef8b913ca6220ff428081c8cf36f8dae9f8ff00735d0e275.scope.
Jan 22 00:09:26 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:09:26 compute-0 podman[276304]: 2026-01-22 00:09:26.800081563 +0000 UTC m=+0.032888710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:09:26 compute-0 podman[276304]: 2026-01-22 00:09:26.905636456 +0000 UTC m=+0.138443513 container init 3f8d88febe072f4eef8b913ca6220ff428081c8cf36f8dae9f8ff00735d0e275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_einstein, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 00:09:26 compute-0 podman[276304]: 2026-01-22 00:09:26.913966014 +0000 UTC m=+0.146773071 container start 3f8d88febe072f4eef8b913ca6220ff428081c8cf36f8dae9f8ff00735d0e275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:09:26 compute-0 podman[276304]: 2026-01-22 00:09:26.917900226 +0000 UTC m=+0.150707293 container attach 3f8d88febe072f4eef8b913ca6220ff428081c8cf36f8dae9f8ff00735d0e275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_einstein, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:09:26 compute-0 funny_einstein[276320]: 167 167
Jan 22 00:09:26 compute-0 systemd[1]: libpod-3f8d88febe072f4eef8b913ca6220ff428081c8cf36f8dae9f8ff00735d0e275.scope: Deactivated successfully.
Jan 22 00:09:26 compute-0 podman[276304]: 2026-01-22 00:09:26.921352303 +0000 UTC m=+0.154159360 container died 3f8d88febe072f4eef8b913ca6220ff428081c8cf36f8dae9f8ff00735d0e275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_einstein, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 00:09:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fed556a1662fab8afc5230699524c4800d1e05de573451c7c703ad3cd63fb22-merged.mount: Deactivated successfully.
Jan 22 00:09:26 compute-0 podman[276304]: 2026-01-22 00:09:26.970461195 +0000 UTC m=+0.203268252 container remove 3f8d88febe072f4eef8b913ca6220ff428081c8cf36f8dae9f8ff00735d0e275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_einstein, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:09:26 compute-0 systemd[1]: libpod-conmon-3f8d88febe072f4eef8b913ca6220ff428081c8cf36f8dae9f8ff00735d0e275.scope: Deactivated successfully.
Jan 22 00:09:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/22612511' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:09:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/22612511' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:09:27 compute-0 podman[276346]: 2026-01-22 00:09:27.150646982 +0000 UTC m=+0.045176292 container create 558f5281c7c2b55b7fb28837f854890b53452ee05c12ef90f3e1968b61cf2fe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_grothendieck, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:09:27 compute-0 systemd[1]: Started libpod-conmon-558f5281c7c2b55b7fb28837f854890b53452ee05c12ef90f3e1968b61cf2fe6.scope.
Jan 22 00:09:27 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1db9654fdcf355efc2d41560dad348bdcd62fd84d507247d24dc29259838489e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1db9654fdcf355efc2d41560dad348bdcd62fd84d507247d24dc29259838489e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1db9654fdcf355efc2d41560dad348bdcd62fd84d507247d24dc29259838489e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1db9654fdcf355efc2d41560dad348bdcd62fd84d507247d24dc29259838489e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:09:27 compute-0 podman[276346]: 2026-01-22 00:09:27.132455648 +0000 UTC m=+0.026984988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:09:27 compute-0 podman[276346]: 2026-01-22 00:09:27.240509038 +0000 UTC m=+0.135038368 container init 558f5281c7c2b55b7fb28837f854890b53452ee05c12ef90f3e1968b61cf2fe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_grothendieck, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:09:27 compute-0 podman[276346]: 2026-01-22 00:09:27.249121785 +0000 UTC m=+0.143651095 container start 558f5281c7c2b55b7fb28837f854890b53452ee05c12ef90f3e1968b61cf2fe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 00:09:27 compute-0 podman[276346]: 2026-01-22 00:09:27.252788798 +0000 UTC m=+0.147318108 container attach 558f5281c7c2b55b7fb28837f854890b53452ee05c12ef90f3e1968b61cf2fe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 00:09:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:27.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:27 compute-0 nova_compute[247516]: 2026-01-22 00:09:27.988 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:09:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1524375743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:09:28 compute-0 ceph-mon[74318]: pgmap v1554: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/4056364113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:09:28 compute-0 nice_grothendieck[276363]: {
Jan 22 00:09:28 compute-0 nice_grothendieck[276363]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 22 00:09:28 compute-0 nice_grothendieck[276363]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:09:28 compute-0 nice_grothendieck[276363]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 00:09:28 compute-0 nice_grothendieck[276363]:         "osd_id": 1,
Jan 22 00:09:28 compute-0 nice_grothendieck[276363]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:09:28 compute-0 nice_grothendieck[276363]:         "type": "bluestore"
Jan 22 00:09:28 compute-0 nice_grothendieck[276363]:     }
Jan 22 00:09:28 compute-0 nice_grothendieck[276363]: }
Jan 22 00:09:28 compute-0 systemd[1]: libpod-558f5281c7c2b55b7fb28837f854890b53452ee05c12ef90f3e1968b61cf2fe6.scope: Deactivated successfully.
Jan 22 00:09:28 compute-0 podman[276346]: 2026-01-22 00:09:28.214246198 +0000 UTC m=+1.108775578 container died 558f5281c7c2b55b7fb28837f854890b53452ee05c12ef90f3e1968b61cf2fe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_grothendieck, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:09:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-1db9654fdcf355efc2d41560dad348bdcd62fd84d507247d24dc29259838489e-merged.mount: Deactivated successfully.
Jan 22 00:09:28 compute-0 podman[276346]: 2026-01-22 00:09:28.27788747 +0000 UTC m=+1.172416780 container remove 558f5281c7c2b55b7fb28837f854890b53452ee05c12ef90f3e1968b61cf2fe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_grothendieck, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 00:09:28 compute-0 systemd[1]: libpod-conmon-558f5281c7c2b55b7fb28837f854890b53452ee05c12ef90f3e1968b61cf2fe6.scope: Deactivated successfully.
Jan 22 00:09:28 compute-0 sudo[276238]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:09:28 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:09:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:09:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:09:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:09:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:28.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:09:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:28 compute-0 nova_compute[247516]: 2026-01-22 00:09:28.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:09:29 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:09:29 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev ae1d1a55-97c7-459c-9050-3fa80fee9880 does not exist
Jan 22 00:09:29 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 4abbd104-78e2-4aa1-9093-0342d43b626a does not exist
Jan 22 00:09:29 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev c4b9ba98-283c-4040-9179-7939787b0239 does not exist
Jan 22 00:09:29 compute-0 sudo[276399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:09:29 compute-0 sudo[276399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:29 compute-0 sudo[276399]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:29 compute-0 sudo[276424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 00:09:29 compute-0 sudo[276424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:29 compute-0 sudo[276424]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:29.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:09:30 compute-0 ceph-mon[74318]: pgmap v1555: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:09:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:30.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:30 compute-0 nova_compute[247516]: 2026-01-22 00:09:30.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:09:31 compute-0 ceph-mon[74318]: pgmap v1556: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:31.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:31 compute-0 nova_compute[247516]: 2026-01-22 00:09:31.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:09:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:32.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:33 compute-0 ceph-mon[74318]: pgmap v1557: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:33.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:09:33 compute-0 podman[276452]: 2026-01-22 00:09:33.986661613 +0000 UTC m=+0.087116132 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 00:09:33 compute-0 nova_compute[247516]: 2026-01-22 00:09:33.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:09:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.002000063s ======
Jan 22 00:09:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:34.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Jan 22 00:09:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:34 compute-0 nova_compute[247516]: 2026-01-22 00:09:34.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:09:35 compute-0 ceph-mon[74318]: pgmap v1558: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:35 compute-0 nova_compute[247516]: 2026-01-22 00:09:35.023 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:09:35 compute-0 nova_compute[247516]: 2026-01-22 00:09:35.024 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:09:35 compute-0 nova_compute[247516]: 2026-01-22 00:09:35.025 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:09:35 compute-0 nova_compute[247516]: 2026-01-22 00:09:35.025 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:09:35 compute-0 nova_compute[247516]: 2026-01-22 00:09:35.026 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:09:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:09:35 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4025541326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:09:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:35.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:35 compute-0 nova_compute[247516]: 2026-01-22 00:09:35.487 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:09:35 compute-0 nova_compute[247516]: 2026-01-22 00:09:35.644 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:09:35 compute-0 nova_compute[247516]: 2026-01-22 00:09:35.645 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5092MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:09:35 compute-0 nova_compute[247516]: 2026-01-22 00:09:35.645 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:09:35 compute-0 nova_compute[247516]: 2026-01-22 00:09:35.646 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:09:35 compute-0 nova_compute[247516]: 2026-01-22 00:09:35.786 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 22 00:09:35 compute-0 nova_compute[247516]: 2026-01-22 00:09:35.787 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:09:35 compute-0 nova_compute[247516]: 2026-01-22 00:09:35.788 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:09:35 compute-0 nova_compute[247516]: 2026-01-22 00:09:35.870 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:09:36 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/4025541326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:09:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:09:36 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2662177514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:09:36 compute-0 nova_compute[247516]: 2026-01-22 00:09:36.376 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:09:36 compute-0 nova_compute[247516]: 2026-01-22 00:09:36.381 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:09:36 compute-0 nova_compute[247516]: 2026-01-22 00:09:36.401 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 00:09:36 compute-0 nova_compute[247516]: 2026-01-22 00:09:36.403 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:09:36 compute-0 nova_compute[247516]: 2026-01-22 00:09:36.403 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
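[annotation] The inventory payload logged at 00:09:36.401 determines what placement will actually schedule: per resource class, capacity is (total - reserved) * allocation_ratio. A worked check against the values above:

    # Inventory as reported for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 18.0

So this host can overcommit to 32 schedulable vCPUs, offers 7167 MB of RAM after the 512 MB reservation, and under-commits disk (18 GB of the 20 GB Ceph-backed store) through the 0.9 ratio.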
Jan 22 00:09:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:36.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
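[annotation] The beast lines recur in a fixed access-log shape: client, user, [timestamp], "request", status, bytes, then a latency field. A rough parser; the field layout is inferred from these lines rather than from a radosgw spec, so adjust the pattern if the local format differs:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous '
            '[22/Jan/2026:00:09:36.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group("addr"), m.group("req"), m.group("status"), m.group("latency"))
    # 192.168.122.102 HEAD / HTTP/1.0 200 0.000000000

The anonymous HEAD / probes alternating between 192.168.122.100 and .102 every two seconds look like load-balancer health checks, which is consistent with the near-zero latencies.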
Jan 22 00:09:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:37 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2662177514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:09:37 compute-0 ceph-mon[74318]: pgmap v1559: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:37 compute-0 sudo[276518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:09:37 compute-0 sudo[276518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:37 compute-0 sudo[276518]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:37 compute-0 sudo[276543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:09:37 compute-0 sudo[276543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:37 compute-0 sudo[276543]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:37.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:38 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:09:38.075 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 00:09:38 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:09:38.079 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 00:09:38 compute-0 nova_compute[247516]: 2026-01-22 00:09:38.405 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:09:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:09:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:09:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:38.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:09:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:39 compute-0 ceph-mon[74318]: pgmap v1560: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:09:39
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'images', 'vms', 'default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.meta']
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 22 00:09:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:39.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:09:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:09:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:09:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:40.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:09:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:41 compute-0 ceph-mon[74318]: pgmap v1561: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:41.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:42.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:43 compute-0 ceph-mon[74318]: pgmap v1562: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:43.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:09:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:44.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:45 compute-0 ceph-mon[74318]: pgmap v1563: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:45.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:46 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:09:46.083 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 00:09:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:46.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:47 compute-0 ceph-mon[74318]: pgmap v1564: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:09:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:47.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:09:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:09:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:09:48.770 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:09:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:09:48.771 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:09:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:09:48.771 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:09:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:48.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:49 compute-0 ceph-mon[74318]: pgmap v1565: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:49.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:09:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:50.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:09:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:51.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:51 compute-0 ceph-mon[74318]: pgmap v1566: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:09:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:52.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:09:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:53 compute-0 ceph-mon[74318]: pgmap v1567: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:53.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
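[annotation] The pg_autoscaler rows above are internally consistent: each pg target equals capacity_ratio * bias * 300. A quick reproduction, assuming the 300 comes from mon_target_pg_per_osd (default 100) times three OSDs behind this 21 GiB cluster; that factor is inferred, not read from the log:

    TARGET_PGS = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs

    rows = [  # (pool, capacity_ratio, bias) copied from the rows above
        (".mgr",               2.0538165363856318e-05, 1.0),
        ("images",             0.0019031427391587568,  1.0),
        ("cephfs.cephfs.meta", 1.4540294062907128e-06, 4.0),
    ]
    for name, ratio, bias in rows:
        # Matches the logged "pg target" value for each pool.
        print(name, ratio * bias * TARGET_PGS)

Targets this far below each pool's current pg_num are quantized back to the current value (1, 32, and 16 above), so _maybe_adjust leaves every pool unchanged.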
Jan 22 00:09:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:09:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:54.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:09:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:55 compute-0 ceph-mon[74318]: pgmap v1568: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:09:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:55.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:09:55 compute-0 podman[276578]: 2026-01-22 00:09:55.672696287 +0000 UTC m=+0.136345629 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
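[annotation] The podman event above is a periodic healthcheck result for ovn_controller; per the config_data, the test is the /openstack/healthcheck script mounted into the container. The same check can be run on demand; a sketch, with the container name taken from the event and exit code 0 meaning healthy:

    import subprocess

    def container_healthy(name="ovn_controller"):
        # "podman healthcheck run" executes the container's configured
        # healthcheck test and exits 0 when it passes.
        return subprocess.run(
            ["podman", "healthcheck", "run", name]).returncode == 0

    print("healthy" if container_healthy() else "unhealthy")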
Jan 22 00:09:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:09:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:56.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:09:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:57 compute-0 ceph-mon[74318]: pgmap v1569: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:57 compute-0 sudo[276604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:09:57 compute-0 sudo[276604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:57 compute-0 sudo[276604]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:57 compute-0 sudo[276629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:09:57 compute-0 sudo[276629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:09:57 compute-0 sudo[276629]: pam_unix(sudo:session): session closed for user root
Jan 22 00:09:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:09:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:57.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:09:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:09:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:09:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:09:58.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:09:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:59 compute-0 ceph-mon[74318]: pgmap v1570: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:09:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:09:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:09:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:09:59.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:10:00 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 22 00:10:00 compute-0 ceph-mon[74318]: overall HEALTH_OK
Jan 22 00:10:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:00.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:01.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:01 compute-0 ceph-mon[74318]: pgmap v1571: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:10:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:02.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:10:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:03.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:03 compute-0 ceph-mon[74318]: pgmap v1572: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:10:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:10:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:04.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:10:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:04 compute-0 podman[276658]: 2026-01-22 00:10:04.975239848 +0000 UTC m=+0.079515467 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 00:10:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:10:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:05.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:10:05 compute-0 ceph-mon[74318]: pgmap v1573: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:06.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:07 compute-0 ceph-mon[74318]: pgmap v1574: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:07.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:10:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:10:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:08.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:10:08 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:10:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:10:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:10:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:10:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:10:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:10:09 compute-0 ceph-mon[74318]: pgmap v1575: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:09.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:10:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:10.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:10:10 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:11.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:12 compute-0 ceph-mon[74318]: pgmap v1576: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:10:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:12.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:10:12 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:13.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:10:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:10:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:14.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:10:14 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:15 compute-0 ceph-mon[74318]: pgmap v1577: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:15.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:10:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:16.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:10:16 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:17 compute-0 ceph-mon[74318]: pgmap v1578: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:17 compute-0 sudo[276683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:10:17 compute-0 sudo[276683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:17 compute-0 sudo[276683]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:17.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:17 compute-0 sudo[276709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:10:17 compute-0 sudo[276709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:17 compute-0 sudo[276709]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:17 compute-0 nova_compute[247516]: 2026-01-22 00:10:17.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:10:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:10:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:18.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:18 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:19 compute-0 ceph-mon[74318]: pgmap v1579: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:19.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:20 compute-0 ceph-mon[74318]: pgmap v1580: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:10:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:20.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:10:20 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:21 compute-0 nova_compute[247516]: 2026-01-22 00:10:21.078 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:10:21 compute-0 nova_compute[247516]: 2026-01-22 00:10:21.079 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:10:21 compute-0 nova_compute[247516]: 2026-01-22 00:10:21.079 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:10:21 compute-0 nova_compute[247516]: 2026-01-22 00:10:21.361 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 00:10:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:21.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:21 compute-0 nova_compute[247516]: 2026-01-22 00:10:21.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:10:21 compute-0 nova_compute[247516]: 2026-01-22 00:10:21.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 22 00:10:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:22.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:22 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:23 compute-0 ceph-mon[74318]: pgmap v1581: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1287105652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:10:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3623711032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:10:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:23.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:10:24 compute-0 ceph-mon[74318]: pgmap v1582: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:10:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:24.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:10:24 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:25.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 00:10:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/325325643' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:10:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 00:10:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/325325643' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:10:26 compute-0 nova_compute[247516]: 2026-01-22 00:10:26.009 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:10:26 compute-0 nova_compute[247516]: 2026-01-22 00:10:26.009 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:10:26 compute-0 podman[276738]: 2026-01-22 00:10:26.031743234 +0000 UTC m=+0.133484920 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 00:10:26 compute-0 ceph-mon[74318]: pgmap v1583: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/325325643' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:10:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/325325643' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:10:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:26.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:26 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:26 compute-0 nova_compute[247516]: 2026-01-22 00:10:26.987 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:10:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:27.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:28 compute-0 ceph-mon[74318]: pgmap v1584: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:10:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:28.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:28 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:29.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:29 compute-0 sudo[276767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:10:29 compute-0 sudo[276767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:29 compute-0 sudo[276767]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:29 compute-0 sudo[276792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:10:29 compute-0 sudo[276792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:29 compute-0 sudo[276792]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1081665592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:10:29 compute-0 sudo[276817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:10:29 compute-0 sudo[276817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:29 compute-0 sudo[276817]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:29 compute-0 sudo[276842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 00:10:29 compute-0 sudo[276842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:30 compute-0 sudo[276842]: pam_unix(sudo:session): session closed for user root
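[annotation] The sudo sequence above is cephadm's orchestrator loop against this host: a /bin/true reachability probe, a "which python3" lookup, then execution of the deployed cephadm copy with gather-facts, which prints a JSON document of host properties. A sketch of reading it directly; it assumes a cephadm binary on PATH, and the key names are recalled from typical output rather than taken from this log:

    import json
    import subprocess

    facts = json.loads(subprocess.check_output(["cephadm", "gather-facts"]))
    # Typical keys include "hostname" and "memory_total_kb"; use .get()
    # since the fact layout varies by release.
    print(facts.get("hostname"), facts.get("memory_total_kb"))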
Jan 22 00:10:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:10:30 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:10:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 00:10:30 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:10:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 00:10:30 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:10:30 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 41e3855c-9a93-4434-ba2a-bf0b8cf3daf1 does not exist
Jan 22 00:10:30 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 0f185ac9-0353-4db9-b4f9-f097e0e8e168 does not exist
Jan 22 00:10:30 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 52d1b3ba-2f84-44dc-b0d8-dc32f1788269 does not exist
Jan 22 00:10:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 00:10:30 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:10:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 00:10:30 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:10:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:10:30 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:10:30 compute-0 sudo[276900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:10:30 compute-0 sudo[276900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:30 compute-0 sudo[276900]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:30 compute-0 sudo[276925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:10:30 compute-0 sudo[276925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:30 compute-0 sudo[276925]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:30 compute-0 sudo[276950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:10:30 compute-0 sudo[276950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:30 compute-0 sudo[276950]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:30 compute-0 ceph-mon[74318]: pgmap v1585: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:30 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2350231521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:10:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:10:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:10:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:10:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:10:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:10:30 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:10:30 compute-0 sudo[276975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 00:10:30 compute-0 sudo[276975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:30.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:30 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:30 compute-0 nova_compute[247516]: 2026-01-22 00:10:30.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:10:31 compute-0 podman[277040]: 2026-01-22 00:10:31.086572542 +0000 UTC m=+0.042463868 container create be0fb70648fbd0e6f8b1c9e7b046c9f654624bdecf1f3be541e9d20d4610d71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 00:10:31 compute-0 systemd[1]: Started libpod-conmon-be0fb70648fbd0e6f8b1c9e7b046c9f654624bdecf1f3be541e9d20d4610d71e.scope.
Jan 22 00:10:31 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:10:31 compute-0 podman[277040]: 2026-01-22 00:10:31.06879942 +0000 UTC m=+0.024690726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:10:31 compute-0 podman[277040]: 2026-01-22 00:10:31.182290179 +0000 UTC m=+0.138181495 container init be0fb70648fbd0e6f8b1c9e7b046c9f654624bdecf1f3be541e9d20d4610d71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hugle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:10:31 compute-0 podman[277040]: 2026-01-22 00:10:31.195192379 +0000 UTC m=+0.151083675 container start be0fb70648fbd0e6f8b1c9e7b046c9f654624bdecf1f3be541e9d20d4610d71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:10:31 compute-0 podman[277040]: 2026-01-22 00:10:31.200022078 +0000 UTC m=+0.155913374 container attach be0fb70648fbd0e6f8b1c9e7b046c9f654624bdecf1f3be541e9d20d4610d71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hugle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 22 00:10:31 compute-0 optimistic_hugle[277057]: 167 167
Jan 22 00:10:31 compute-0 systemd[1]: libpod-be0fb70648fbd0e6f8b1c9e7b046c9f654624bdecf1f3be541e9d20d4610d71e.scope: Deactivated successfully.
Jan 22 00:10:31 compute-0 conmon[277057]: conmon be0fb70648fbd0e6f8b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be0fb70648fbd0e6f8b1c9e7b046c9f654624bdecf1f3be541e9d20d4610d71e.scope/container/memory.events
Jan 22 00:10:31 compute-0 podman[277040]: 2026-01-22 00:10:31.212418933 +0000 UTC m=+0.168310259 container died be0fb70648fbd0e6f8b1c9e7b046c9f654624bdecf1f3be541e9d20d4610d71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:10:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-69d27c2303fd2f5cbc36ce96834551cb3ddccd610cbbb0d50bd7d75080dfdd9f-merged.mount: Deactivated successfully.
Jan 22 00:10:31 compute-0 podman[277040]: 2026-01-22 00:10:31.263973162 +0000 UTC m=+0.219864468 container remove be0fb70648fbd0e6f8b1c9e7b046c9f654624bdecf1f3be541e9d20d4610d71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 00:10:31 compute-0 systemd[1]: libpod-conmon-be0fb70648fbd0e6f8b1c9e7b046c9f654624bdecf1f3be541e9d20d4610d71e.scope: Deactivated successfully.
Jan 22 00:10:31 compute-0 podman[277079]: 2026-01-22 00:10:31.439168313 +0000 UTC m=+0.048109093 container create 4eeb4408b396cb7030d8e16a846ac24cf08c762ba38f91be2204d08ee69f26a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_elion, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 00:10:31 compute-0 systemd[1]: Started libpod-conmon-4eeb4408b396cb7030d8e16a846ac24cf08c762ba38f91be2204d08ee69f26a9.scope.
Jan 22 00:10:31 compute-0 podman[277079]: 2026-01-22 00:10:31.416394727 +0000 UTC m=+0.025335557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:10:31 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d16847826cd0b678da12d5208ffa54879a1bb6f330d7ea5e91c5da768140d81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d16847826cd0b678da12d5208ffa54879a1bb6f330d7ea5e91c5da768140d81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d16847826cd0b678da12d5208ffa54879a1bb6f330d7ea5e91c5da768140d81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d16847826cd0b678da12d5208ffa54879a1bb6f330d7ea5e91c5da768140d81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d16847826cd0b678da12d5208ffa54879a1bb6f330d7ea5e91c5da768140d81/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 00:10:31 compute-0 podman[277079]: 2026-01-22 00:10:31.528241195 +0000 UTC m=+0.137182005 container init 4eeb4408b396cb7030d8e16a846ac24cf08c762ba38f91be2204d08ee69f26a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_elion, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 00:10:31 compute-0 podman[277079]: 2026-01-22 00:10:31.535897493 +0000 UTC m=+0.144838273 container start 4eeb4408b396cb7030d8e16a846ac24cf08c762ba38f91be2204d08ee69f26a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_elion, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:10:31 compute-0 podman[277079]: 2026-01-22 00:10:31.539968708 +0000 UTC m=+0.148909488 container attach 4eeb4408b396cb7030d8e16a846ac24cf08c762ba38f91be2204d08ee69f26a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_elion, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Jan 22 00:10:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:10:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:31.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:10:32 compute-0 elated_elion[277096]: --> passed data devices: 0 physical, 1 LVM
Jan 22 00:10:32 compute-0 elated_elion[277096]: --> relative data size: 1.0
Jan 22 00:10:32 compute-0 elated_elion[277096]: --> All data devices are unavailable
Jan 22 00:10:32 compute-0 systemd[1]: libpod-4eeb4408b396cb7030d8e16a846ac24cf08c762ba38f91be2204d08ee69f26a9.scope: Deactivated successfully.
Jan 22 00:10:32 compute-0 podman[277079]: 2026-01-22 00:10:32.47249631 +0000 UTC m=+1.081437110 container died 4eeb4408b396cb7030d8e16a846ac24cf08c762ba38f91be2204d08ee69f26a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:10:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d16847826cd0b678da12d5208ffa54879a1bb6f330d7ea5e91c5da768140d81-merged.mount: Deactivated successfully.
Jan 22 00:10:32 compute-0 podman[277079]: 2026-01-22 00:10:32.5369872 +0000 UTC m=+1.145927980 container remove 4eeb4408b396cb7030d8e16a846ac24cf08c762ba38f91be2204d08ee69f26a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_elion, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:10:32 compute-0 systemd[1]: libpod-conmon-4eeb4408b396cb7030d8e16a846ac24cf08c762ba38f91be2204d08ee69f26a9.scope: Deactivated successfully.
Jan 22 00:10:32 compute-0 sudo[276975]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:32 compute-0 sudo[277124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:10:32 compute-0 sudo[277124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:32 compute-0 sudo[277124]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:32 compute-0 sudo[277149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:10:32 compute-0 sudo[277149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:32 compute-0 sudo[277149]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:32 compute-0 sudo[277174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:10:32 compute-0 sudo[277174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:32 compute-0 sudo[277174]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:32 compute-0 ceph-mon[74318]: pgmap v1586: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:32 compute-0 sudo[277199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 22 00:10:32 compute-0 sudo[277199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:32.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:32 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:32 compute-0 nova_compute[247516]: 2026-01-22 00:10:32.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:10:33 compute-0 podman[277265]: 2026-01-22 00:10:33.19176889 +0000 UTC m=+0.041424585 container create 7c61f87c9d2078bbfe04552cfe92709fbe2ee7299d41ef4fa85465da8b3e6b7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_northcutt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 00:10:33 compute-0 systemd[1]: Started libpod-conmon-7c61f87c9d2078bbfe04552cfe92709fbe2ee7299d41ef4fa85465da8b3e6b7e.scope.
Jan 22 00:10:33 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:10:33 compute-0 podman[277265]: 2026-01-22 00:10:33.270029086 +0000 UTC m=+0.119684801 container init 7c61f87c9d2078bbfe04552cfe92709fbe2ee7299d41ef4fa85465da8b3e6b7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:10:33 compute-0 podman[277265]: 2026-01-22 00:10:33.17564514 +0000 UTC m=+0.025300855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:10:33 compute-0 podman[277265]: 2026-01-22 00:10:33.279194611 +0000 UTC m=+0.128850306 container start 7c61f87c9d2078bbfe04552cfe92709fbe2ee7299d41ef4fa85465da8b3e6b7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_northcutt, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 00:10:33 compute-0 podman[277265]: 2026-01-22 00:10:33.284296419 +0000 UTC m=+0.133952144 container attach 7c61f87c9d2078bbfe04552cfe92709fbe2ee7299d41ef4fa85465da8b3e6b7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_northcutt, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:10:33 compute-0 affectionate_northcutt[277281]: 167 167
Jan 22 00:10:33 compute-0 systemd[1]: libpod-7c61f87c9d2078bbfe04552cfe92709fbe2ee7299d41ef4fa85465da8b3e6b7e.scope: Deactivated successfully.
Jan 22 00:10:33 compute-0 podman[277265]: 2026-01-22 00:10:33.286815417 +0000 UTC m=+0.136471112 container died 7c61f87c9d2078bbfe04552cfe92709fbe2ee7299d41ef4fa85465da8b3e6b7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_northcutt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 00:10:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fc63d43892ba84371ee5b227d3f551cdcdf98af53698c9f7720f4d44c1a0df9-merged.mount: Deactivated successfully.
Jan 22 00:10:33 compute-0 podman[277265]: 2026-01-22 00:10:33.325673181 +0000 UTC m=+0.175328876 container remove 7c61f87c9d2078bbfe04552cfe92709fbe2ee7299d41ef4fa85465da8b3e6b7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_northcutt, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:10:33 compute-0 systemd[1]: libpod-conmon-7c61f87c9d2078bbfe04552cfe92709fbe2ee7299d41ef4fa85465da8b3e6b7e.scope: Deactivated successfully.
Jan 22 00:10:33 compute-0 podman[277307]: 2026-01-22 00:10:33.502033699 +0000 UTC m=+0.045678586 container create a6ad3d60255a576827c7626f9cfcb3b7797e060abc7c2dd0729b7b9a777d00f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 22 00:10:33 compute-0 systemd[1]: Started libpod-conmon-a6ad3d60255a576827c7626f9cfcb3b7797e060abc7c2dd0729b7b9a777d00f8.scope.
Jan 22 00:10:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:10:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:33.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:10:33 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f1486bde25259eaafe961d3d1a119b98d1ffff1149506c78a9eb5cd8f4d6366/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f1486bde25259eaafe961d3d1a119b98d1ffff1149506c78a9eb5cd8f4d6366/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f1486bde25259eaafe961d3d1a119b98d1ffff1149506c78a9eb5cd8f4d6366/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f1486bde25259eaafe961d3d1a119b98d1ffff1149506c78a9eb5cd8f4d6366/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:10:33 compute-0 podman[277307]: 2026-01-22 00:10:33.482256476 +0000 UTC m=+0.025901393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:10:33 compute-0 podman[277307]: 2026-01-22 00:10:33.582777372 +0000 UTC m=+0.126422269 container init a6ad3d60255a576827c7626f9cfcb3b7797e060abc7c2dd0729b7b9a777d00f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:10:33 compute-0 podman[277307]: 2026-01-22 00:10:33.589075168 +0000 UTC m=+0.132720065 container start a6ad3d60255a576827c7626f9cfcb3b7797e060abc7c2dd0729b7b9a777d00f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 00:10:33 compute-0 podman[277307]: 2026-01-22 00:10:33.592730322 +0000 UTC m=+0.136375229 container attach a6ad3d60255a576827c7626f9cfcb3b7797e060abc7c2dd0729b7b9a777d00f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 00:10:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:10:33 compute-0 nova_compute[247516]: 2026-01-22 00:10:33.994 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:10:34 compute-0 determined_austin[277324]: {
Jan 22 00:10:34 compute-0 determined_austin[277324]:     "1": [
Jan 22 00:10:34 compute-0 determined_austin[277324]:         {
Jan 22 00:10:34 compute-0 determined_austin[277324]:             "devices": [
Jan 22 00:10:34 compute-0 determined_austin[277324]:                 "/dev/loop3"
Jan 22 00:10:34 compute-0 determined_austin[277324]:             ],
Jan 22 00:10:34 compute-0 determined_austin[277324]:             "lv_name": "ceph_lv0",
Jan 22 00:10:34 compute-0 determined_austin[277324]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:10:34 compute-0 determined_austin[277324]:             "lv_size": "7511998464",
Jan 22 00:10:34 compute-0 determined_austin[277324]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 00:10:34 compute-0 determined_austin[277324]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:10:34 compute-0 determined_austin[277324]:             "name": "ceph_lv0",
Jan 22 00:10:34 compute-0 determined_austin[277324]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:10:34 compute-0 determined_austin[277324]:             "tags": {
Jan 22 00:10:34 compute-0 determined_austin[277324]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:10:34 compute-0 determined_austin[277324]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:10:34 compute-0 determined_austin[277324]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 00:10:34 compute-0 determined_austin[277324]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:10:34 compute-0 determined_austin[277324]:                 "ceph.cluster_name": "ceph",
Jan 22 00:10:34 compute-0 determined_austin[277324]:                 "ceph.crush_device_class": "",
Jan 22 00:10:34 compute-0 determined_austin[277324]:                 "ceph.encrypted": "0",
Jan 22 00:10:34 compute-0 determined_austin[277324]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:10:34 compute-0 determined_austin[277324]:                 "ceph.osd_id": "1",
Jan 22 00:10:34 compute-0 determined_austin[277324]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 00:10:34 compute-0 determined_austin[277324]:                 "ceph.type": "block",
Jan 22 00:10:34 compute-0 determined_austin[277324]:                 "ceph.vdo": "0"
Jan 22 00:10:34 compute-0 determined_austin[277324]:             },
Jan 22 00:10:34 compute-0 determined_austin[277324]:             "type": "block",
Jan 22 00:10:34 compute-0 determined_austin[277324]:             "vg_name": "ceph_vg0"
Jan 22 00:10:34 compute-0 determined_austin[277324]:         }
Jan 22 00:10:34 compute-0 determined_austin[277324]:     ]
Jan 22 00:10:34 compute-0 determined_austin[277324]: }
Jan 22 00:10:34 compute-0 systemd[1]: libpod-a6ad3d60255a576827c7626f9cfcb3b7797e060abc7c2dd0729b7b9a777d00f8.scope: Deactivated successfully.
Jan 22 00:10:34 compute-0 podman[277333]: 2026-01-22 00:10:34.524276832 +0000 UTC m=+0.042662334 container died a6ad3d60255a576827c7626f9cfcb3b7797e060abc7c2dd0729b7b9a777d00f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:10:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f1486bde25259eaafe961d3d1a119b98d1ffff1149506c78a9eb5cd8f4d6366-merged.mount: Deactivated successfully.
Jan 22 00:10:34 compute-0 podman[277333]: 2026-01-22 00:10:34.578858575 +0000 UTC m=+0.097244077 container remove a6ad3d60255a576827c7626f9cfcb3b7797e060abc7c2dd0729b7b9a777d00f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:10:34 compute-0 systemd[1]: libpod-conmon-a6ad3d60255a576827c7626f9cfcb3b7797e060abc7c2dd0729b7b9a777d00f8.scope: Deactivated successfully.
Jan 22 00:10:34 compute-0 sudo[277199]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:34 compute-0 sudo[277348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:10:34 compute-0 sudo[277348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:34 compute-0 sudo[277348]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:34 compute-0 sudo[277373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:10:34 compute-0 sudo[277373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:34 compute-0 sudo[277373]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:34 compute-0 sudo[277398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:10:34 compute-0 sudo[277398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:34 compute-0 sudo[277398]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:34.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:34 compute-0 ceph-mon[74318]: pgmap v1587: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:34 compute-0 sudo[277423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 22 00:10:34 compute-0 sudo[277423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:34 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:34 compute-0 nova_compute[247516]: 2026-01-22 00:10:34.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:10:34 compute-0 nova_compute[247516]: 2026-01-22 00:10:34.994 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:10:35 compute-0 nova_compute[247516]: 2026-01-22 00:10:35.124 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:10:35 compute-0 nova_compute[247516]: 2026-01-22 00:10:35.125 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:10:35 compute-0 nova_compute[247516]: 2026-01-22 00:10:35.125 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:10:35 compute-0 nova_compute[247516]: 2026-01-22 00:10:35.125 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:10:35 compute-0 nova_compute[247516]: 2026-01-22 00:10:35.126 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:10:35 compute-0 podman[277488]: 2026-01-22 00:10:35.317883678 +0000 UTC m=+0.084202542 container create c2d336c2aa363a26af584faf372c68a8655242f7ff42d63b180f841ead7f70c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:10:35 compute-0 systemd[1]: Started libpod-conmon-c2d336c2aa363a26af584faf372c68a8655242f7ff42d63b180f841ead7f70c2.scope.
Jan 22 00:10:35 compute-0 podman[277488]: 2026-01-22 00:10:35.259193088 +0000 UTC m=+0.025511972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:10:35 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:10:35 compute-0 podman[277488]: 2026-01-22 00:10:35.385247946 +0000 UTC m=+0.151566830 container init c2d336c2aa363a26af584faf372c68a8655242f7ff42d63b180f841ead7f70c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_napier, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:10:35 compute-0 podman[277488]: 2026-01-22 00:10:35.393894273 +0000 UTC m=+0.160213137 container start c2d336c2aa363a26af584faf372c68a8655242f7ff42d63b180f841ead7f70c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_napier, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:10:35 compute-0 podman[277488]: 2026-01-22 00:10:35.399685823 +0000 UTC m=+0.166004697 container attach c2d336c2aa363a26af584faf372c68a8655242f7ff42d63b180f841ead7f70c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 00:10:35 compute-0 elastic_napier[277524]: 167 167
Jan 22 00:10:35 compute-0 podman[277488]: 2026-01-22 00:10:35.401956384 +0000 UTC m=+0.168275248 container died c2d336c2aa363a26af584faf372c68a8655242f7ff42d63b180f841ead7f70c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_napier, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 00:10:35 compute-0 systemd[1]: libpod-c2d336c2aa363a26af584faf372c68a8655242f7ff42d63b180f841ead7f70c2.scope: Deactivated successfully.
Jan 22 00:10:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-3994cf58f7d32f4e78d3ff1d0a61889a5dd82b038b08f524f39c18c8bf3289b3-merged.mount: Deactivated successfully.
Jan 22 00:10:35 compute-0 podman[277488]: 2026-01-22 00:10:35.452775279 +0000 UTC m=+0.219094143 container remove c2d336c2aa363a26af584faf372c68a8655242f7ff42d63b180f841ead7f70c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:10:35 compute-0 podman[277521]: 2026-01-22 00:10:35.468762325 +0000 UTC m=+0.102313653 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:10:35 compute-0 systemd[1]: libpod-conmon-c2d336c2aa363a26af584faf372c68a8655242f7ff42d63b180f841ead7f70c2.scope: Deactivated successfully.
Jan 22 00:10:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:35.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:10:35 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:10:35 compute-0 nova_compute[247516]: 2026-01-22 00:10:35.639 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
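[editor's note] The CMD line above shows nova's periodic resource audit shelling out to `ceph df` through oslo.concurrency to size the RBD ephemeral backend. A minimal sketch of the same call, reusing the `--id openstack` credentials and conf path from the log; the JSON key read at the end assumes the usual `ceph df --format=json` layout:

```python
# Sketch of the call nova logs above. processutils.execute returns
# (stdout, stderr) and raises ProcessExecutionError on a nonzero rc.
import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'df', '--format=json', '--id', 'openstack',
    '--conf', '/etc/ceph/ceph.conf')
stats = json.loads(out)
print(stats['stats']['total_avail_bytes'])  # cluster-wide free space
```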
Jan 22 00:10:35 compute-0 podman[277568]: 2026-01-22 00:10:35.657496086 +0000 UTC m=+0.060184006 container create e4593f0dffb379744fd67dc5efe33c55c3e0dbfdfa5b3e473b9d6d54725903b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_snyder, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 00:10:35 compute-0 systemd[1]: Started libpod-conmon-e4593f0dffb379744fd67dc5efe33c55c3e0dbfdfa5b3e473b9d6d54725903b8.scope.
Jan 22 00:10:35 compute-0 podman[277568]: 2026-01-22 00:10:35.634214245 +0000 UTC m=+0.036902225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:10:35 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2516c83792f214708d16410c7582eedb7b67e68a2873c576402586ab876efb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2516c83792f214708d16410c7582eedb7b67e68a2873c576402586ab876efb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2516c83792f214708d16410c7582eedb7b67e68a2873c576402586ab876efb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2516c83792f214708d16410c7582eedb7b67e68a2873c576402586ab876efb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:10:35 compute-0 podman[277568]: 2026-01-22 00:10:35.756370342 +0000 UTC m=+0.159058312 container init e4593f0dffb379744fd67dc5efe33c55c3e0dbfdfa5b3e473b9d6d54725903b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_snyder, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:10:35 compute-0 podman[277568]: 2026-01-22 00:10:35.769668634 +0000 UTC m=+0.172356554 container start e4593f0dffb379744fd67dc5efe33c55c3e0dbfdfa5b3e473b9d6d54725903b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 00:10:35 compute-0 podman[277568]: 2026-01-22 00:10:35.773165922 +0000 UTC m=+0.175853842 container attach e4593f0dffb379744fd67dc5efe33c55c3e0dbfdfa5b3e473b9d6d54725903b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:10:35 compute-0 nova_compute[247516]: 2026-01-22 00:10:35.843 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:10:35 compute-0 nova_compute[247516]: 2026-01-22 00:10:35.845 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5106MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:10:35 compute-0 nova_compute[247516]: 2026-01-22 00:10:35.846 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:10:35 compute-0 nova_compute[247516]: 2026-01-22 00:10:35.846 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
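[editor's note] The acquire/release pair around "compute_resources" is oslo.concurrency's named-lock idiom: every resource-tracker update on this host serializes on the same in-process semaphore, which is what produces the "waited 0.000s" / "held 0.794s" bookkeeping. A minimal sketch of the idiom (the function body is a placeholder, not nova's code):

```python
# The lock name matches the log lines above; the guarded body is a stand-in.
from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def update_available_resource():
    # Everything here runs under the "compute_resources" lock, producing
    # the acquire/release log lines seen above when debug logging is on.
    pass
```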
Jan 22 00:10:35 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3359267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:10:36 compute-0 nova_compute[247516]: 2026-01-22 00:10:36.016 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 22 00:10:36 compute-0 nova_compute[247516]: 2026-01-22 00:10:36.017 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:10:36 compute-0 nova_compute[247516]: 2026-01-22 00:10:36.017 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:10:36 compute-0 nova_compute[247516]: 2026-01-22 00:10:36.100 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:10:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:10:36 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/939321466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:10:36 compute-0 nova_compute[247516]: 2026-01-22 00:10:36.603 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:10:36 compute-0 nova_compute[247516]: 2026-01-22 00:10:36.612 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:10:36 compute-0 nova_compute[247516]: 2026-01-22 00:10:36.638 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
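[editor's note] The inventory dict above is what placement uses to bound scheduling; to a first approximation, effective capacity per resource class is (total - reserved) * allocation_ratio. Worked out with the logged numbers (a reading of the data, not nova code):

```python
# Effective schedulable capacity from the inventory record logged above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 0,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 18.0
```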
Jan 22 00:10:36 compute-0 nova_compute[247516]: 2026-01-22 00:10:36.640 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:10:36 compute-0 nova_compute[247516]: 2026-01-22 00:10:36.640 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:10:36 compute-0 nova_compute[247516]: 2026-01-22 00:10:36.641 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:10:36 compute-0 nova_compute[247516]: 2026-01-22 00:10:36.642 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 22 00:10:36 compute-0 nova_compute[247516]: 2026-01-22 00:10:36.662 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 22 00:10:36 compute-0 gifted_snyder[277587]: {
Jan 22 00:10:36 compute-0 gifted_snyder[277587]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 22 00:10:36 compute-0 gifted_snyder[277587]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:10:36 compute-0 gifted_snyder[277587]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 00:10:36 compute-0 gifted_snyder[277587]:         "osd_id": 1,
Jan 22 00:10:36 compute-0 gifted_snyder[277587]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:10:36 compute-0 gifted_snyder[277587]:         "type": "bluestore"
Jan 22 00:10:36 compute-0 gifted_snyder[277587]:     }
Jan 22 00:10:36 compute-0 gifted_snyder[277587]: }
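[editor's note] The create/start/attach/died/remove sequence around the `gifted_snyder` container is cephadm running a one-shot device inventory inside the Ceph image; the JSON it printed (keyed by OSD UUID, with ceph_fsid/device/osd_id/type fields) resembles `ceph-volume raw list --format json` output, though the exact command is not shown. A parsing sketch over the blob exactly as logged:

```python
# Parse the one-shot container's JSON output shown above (inlined here).
import json

raw = '''{
    "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
        "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 1,
        "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
        "type": "bluestore"
    }
}'''
for osd_uuid, osd in json.loads(raw).items():
    print(osd['osd_id'], osd['type'], osd['device'])  # 1 bluestore /dev/...
```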
Jan 22 00:10:36 compute-0 systemd[1]: libpod-e4593f0dffb379744fd67dc5efe33c55c3e0dbfdfa5b3e473b9d6d54725903b8.scope: Deactivated successfully.
Jan 22 00:10:36 compute-0 podman[277568]: 2026-01-22 00:10:36.761943668 +0000 UTC m=+1.164631588 container died e4593f0dffb379744fd67dc5efe33c55c3e0dbfdfa5b3e473b9d6d54725903b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 00:10:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac2516c83792f214708d16410c7582eedb7b67e68a2873c576402586ab876efb-merged.mount: Deactivated successfully.
Jan 22 00:10:36 compute-0 podman[277568]: 2026-01-22 00:10:36.835025124 +0000 UTC m=+1.237713044 container remove e4593f0dffb379744fd67dc5efe33c55c3e0dbfdfa5b3e473b9d6d54725903b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_snyder, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 00:10:36 compute-0 systemd[1]: libpod-conmon-e4593f0dffb379744fd67dc5efe33c55c3e0dbfdfa5b3e473b9d6d54725903b8.scope: Deactivated successfully.
Jan 22 00:10:36 compute-0 sudo[277423]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:36.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:10:36 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:10:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:10:36 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:10:36 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev dca8c290-9f17-4f5f-b9f7-657bfd3346fd does not exist
Jan 22 00:10:36 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev eacb6745-5228-4e56-8a3b-d59e05a3df4f does not exist
Jan 22 00:10:36 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 1113b693-11f0-4542-bba3-d367eb0a34af does not exist
Jan 22 00:10:36 compute-0 ceph-mon[74318]: pgmap v1588: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:36 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/939321466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:10:36 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:10:36 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:10:36 compute-0 sudo[277642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:10:36 compute-0 sudo[277642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:36 compute-0 sudo[277642]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:36 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:37 compute-0 sudo[277667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 00:10:37 compute-0 sudo[277667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:37 compute-0 sudo[277667]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:37.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:37 compute-0 sudo[277693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:10:37 compute-0 sudo[277693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:37 compute-0 sudo[277693]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:37 compute-0 sudo[277718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:10:37 compute-0 sudo[277718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:37 compute-0 sudo[277718]: pam_unix(sudo:session): session closed for user root
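[editor's note] The repeated `sudo /bin/true` sessions opened for `ceph-admin` are consistent with cephadm periodically verifying that its SSH user still has passwordless root on each managed host. A hedged equivalent of that check; `sudo_ok` is an illustrative helper, not cephadm's code:

```python
# Run a no-op as root over SSH; exit status 0 means "sudo still works",
# matching the open/close PAM session pairs logged above.
import subprocess

def sudo_ok(host: str, user: str = "ceph-admin") -> bool:
    res = subprocess.run(
        ["ssh", f"{user}@{host}", "sudo", "/bin/true"],
        capture_output=True, timeout=30)
    return res.returncode == 0

# sudo_ok("compute-0")  -> True while the checks above keep succeeding
```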
Jan 22 00:10:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:10:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:10:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:38.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:10:38 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:39 compute-0 ceph-mon[74318]: pgmap v1589: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:39 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:10:39.065 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 00:10:39 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:10:39.067 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:10:39
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['default.rgw.control', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'vms']
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 22 00:10:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:39.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:10:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:10:39 compute-0 nova_compute[247516]: 2026-01-22 00:10:39.662 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:10:40 compute-0 ceph-mon[74318]: pgmap v1590: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:10:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:40.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:10:40 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:41.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:42 compute-0 ceph-mon[74318]: pgmap v1591: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:10:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:42.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:10:42 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:43.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:10:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:44.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:44 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:45 compute-0 ceph-mon[74318]: pgmap v1592: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:45.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:46 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:10:46.071 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
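[editor's note] This transaction closes the loop opened at 00:10:39: the agent matched the SB_Global nb_cfg bump (24 -> 25), deliberately delayed its acknowledgement by 7 seconds, and now writes the new value into Chassis_Private.external_ids via an ovsdbapp `db_set`. A sketch of the same write through ovsdbapp's generic command API; `bump_metadata_sb_cfg` is an illustrative wrapper and `api` stands for an already-connected ovsdbapp backend (connection setup omitted; argument details vary slightly by ovsdbapp version):

```python
def bump_metadata_sb_cfg(api, nb_cfg: int,
                         record='c2a76040-4536-46ac-93c9-20aa76f22ff4'):
    """Acknowledge nb_cfg on the chassis row, as in the log line above.

    `api` is a connected ovsdbapp OVN Southbound backend.
    """
    with api.transaction(check_error=True) as txn:
        txn.add(api.db_set(
            'Chassis_Private', record,
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': str(nb_cfg)})))
```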
Jan 22 00:10:46 compute-0 ceph-mon[74318]: pgmap v1593: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:46.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:46 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:47.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:48 compute-0 ceph-mon[74318]: pgmap v1594: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:10:48.771 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:10:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:10:48.772 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:10:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:10:48.772 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:10:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:10:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:48.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:48 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:49.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:50 compute-0 ceph-mon[74318]: pgmap v1595: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:10:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:50.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:10:50 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:51.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:52 compute-0 ceph-mon[74318]: pgmap v1596: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:10:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:52.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:10:52 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:53.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
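[editor's note] The pg_autoscaler figures above are reproducible arithmetic: pg target = space fraction x bias x (target PGs per OSD x OSD count). Assuming the default mon_target_pg_per_osd of 100 and 3 OSDs (neither value appears in this excerpt, but 100 x 3 = 300 makes every logged target fall out exactly), the raw targets are then rounded to a power of two subject to per-pool minimums and a change threshold, which is why every pool stays at its current pg count:

```python
# Reconstruction of the pg_autoscaler numbers logged above.
# Assumptions (see lead-in): 3 OSDs, mon_target_pg_per_osd = 100.
total_target_pgs = 100 * 3

pools = {               # pool: (space fraction from the log, bias)
    '.mgr':               (2.0538165363856318e-05, 1.0),
    'images':             (0.0019031427391587568,  1.0),
    'cephfs.cephfs.meta': (1.4540294062907128e-06, 4.0),
}
for name, (frac, bias) in pools.items():
    print(name, frac * bias * total_target_pgs)
# .mgr 0.006161..., images 0.570942..., cephfs.cephfs.meta 0.001744...
# -> all match the "pg target" values in the log lines above.
```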
Jan 22 00:10:54 compute-0 ceph-mon[74318]: pgmap v1597: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:10:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:54.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:10:54 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:55.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:10:56 compute-0 ceph-mon[74318]: pgmap v1598: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:10:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:56.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:10:56 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:57 compute-0 podman[277752]: 2026-01-22 00:10:57.003173776 +0000 UTC m=+0.102872921 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
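[editor's note] The `health_status=healthy` events for ovn_controller and ovn_metadata_agent come from podman's healthcheck timer running the configured test (`/openstack/healthcheck`, bind-mounted read-only per the config_data above). The same check can be triggered on demand; a sketch via subprocess, with the container name taken from the log:

```python
# Invoke the container's configured healthcheck once; exit status 0
# corresponds to the health_status=healthy events logged above.
import subprocess

res = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
print("healthy" if res.returncode == 0 else "unhealthy")
```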
Jan 22 00:10:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:10:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:57.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:10:57 compute-0 sudo[277782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:10:57 compute-0 sudo[277782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:57 compute-0 sudo[277782]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:57 compute-0 sudo[277807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:10:57 compute-0 sudo[277807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:10:57 compute-0 sudo[277807]: pam_unix(sudo:session): session closed for user root
Jan 22 00:10:58 compute-0 ceph-mon[74318]: pgmap v1599: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:10:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:10:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:10:58.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:10:58 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:10:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:10:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:10:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:10:59.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:00 compute-0 ceph-mon[74318]: pgmap v1600: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:00.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:00 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:01.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:02 compute-0 ceph-mon[74318]: pgmap v1601: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:11:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:02.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:11:02 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:03.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:11:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:11:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:04.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:11:04 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:05 compute-0 ceph-mon[74318]: pgmap v1602: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:05.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:05 compute-0 podman[277836]: 2026-01-22 00:11:05.94843829 +0000 UTC m=+0.052476277 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 00:11:06 compute-0 ceph-mon[74318]: pgmap v1603: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:06.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:06 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:07.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:11:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:08.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:09 compute-0 ceph-mon[74318]: pgmap v1604: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:11:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:11:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:11:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:11:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:11:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:11:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:09.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:10 compute-0 ceph-mon[74318]: pgmap v1605: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:10.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:11.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:12 compute-0 ceph-mon[74318]: pgmap v1606: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:11:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:12.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:11:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:13.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:11:14 compute-0 ceph-mon[74318]: pgmap v1607: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:11:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:14.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:11:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:15.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:16 compute-0 ceph-mon[74318]: pgmap v1608: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:11:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:16.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:11:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:11:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:17.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:11:17 compute-0 sudo[277861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:11:17 compute-0 sudo[277861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:17 compute-0 sudo[277861]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:17 compute-0 sudo[277886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:11:17 compute-0 sudo[277886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:18 compute-0 sudo[277886]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:18 compute-0 ceph-mon[74318]: pgmap v1609: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:11:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:18.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:11:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:19.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:11:20 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:11:20.564 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 00:11:20 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:11:20.565 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 00:11:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:20.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:21 compute-0 ceph-mon[74318]: pgmap v1610: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:21.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:21 compute-0 nova_compute[247516]: 2026-01-22 00:11:21.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:11:21 compute-0 nova_compute[247516]: 2026-01-22 00:11:21.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:11:21 compute-0 nova_compute[247516]: 2026-01-22 00:11:21.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:11:22 compute-0 nova_compute[247516]: 2026-01-22 00:11:22.072 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 00:11:22 compute-0 ceph-mon[74318]: pgmap v1611: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:11:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:22.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:11:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3830845013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:11:23 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:11:23.568 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 00:11:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:23.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:11:24 compute-0 ceph-mon[74318]: pgmap v1612: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:11:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:24.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:11:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:25.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:26 compute-0 ceph-mon[74318]: pgmap v1613: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1460312212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:11:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:26.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:27.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/38803414' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:11:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/38803414' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:11:27 compute-0 podman[277916]: 2026-01-22 00:11:27.975867246 +0000 UTC m=+0.083385356 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 22 00:11:27 compute-0 nova_compute[247516]: 2026-01-22 00:11:27.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:11:27 compute-0 nova_compute[247516]: 2026-01-22 00:11:27.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:11:27 compute-0 nova_compute[247516]: 2026-01-22 00:11:27.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:11:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:11:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:28 compute-0 ceph-mon[74318]: pgmap v1614: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:28.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:28 compute-0 nova_compute[247516]: 2026-01-22 00:11:28.987 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:11:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:29.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:30 compute-0 ceph-mon[74318]: pgmap v1615: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:11:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:30.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:11:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:31.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:32 compute-0 nova_compute[247516]: 2026-01-22 00:11:32.395 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:11:32 compute-0 ceph-mon[74318]: pgmap v1616: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:32.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:33.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:11:33 compute-0 nova_compute[247516]: 2026-01-22 00:11:33.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:11:33 compute-0 nova_compute[247516]: 2026-01-22 00:11:33.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:11:34 compute-0 ceph-mon[74318]: pgmap v1617: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:34.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:35 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1666800747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:11:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:35.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:35 compute-0 nova_compute[247516]: 2026-01-22 00:11:35.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:11:36 compute-0 ceph-mon[74318]: pgmap v1618: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:36.612976) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040696613067, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2106, "num_deletes": 251, "total_data_size": 3865347, "memory_usage": 3941296, "flush_reason": "Manual Compaction"}
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040696661191, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 3798287, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33816, "largest_seqno": 35920, "table_properties": {"data_size": 3788907, "index_size": 5938, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19033, "raw_average_key_size": 20, "raw_value_size": 3770188, "raw_average_value_size": 3998, "num_data_blocks": 261, "num_entries": 943, "num_filter_entries": 943, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769040470, "oldest_key_time": 1769040470, "file_creation_time": 1769040696, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 48320 microseconds, and 12658 cpu microseconds.
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:36.661293) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 3798287 bytes OK
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:36.661329) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:36.663398) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:36.663416) EVENT_LOG_v1 {"time_micros": 1769040696663410, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:36.663444) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 3856906, prev total WAL file size 3856906, number of live WAL files 2.
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:36.664446) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(3709KB)], [74(8524KB)]
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040696664691, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 12527222, "oldest_snapshot_seqno": -1}
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6104 keys, 10527724 bytes, temperature: kUnknown
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040696795020, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 10527724, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10486235, "index_size": 25123, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 155366, "raw_average_key_size": 25, "raw_value_size": 10375520, "raw_average_value_size": 1699, "num_data_blocks": 1015, "num_entries": 6104, "num_filter_entries": 6104, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769040696, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:36.795301) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10527724 bytes
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:36.797627) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 96.0 rd, 80.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.3 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 6619, records dropped: 515 output_compression: NoCompression
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:36.797691) EVENT_LOG_v1 {"time_micros": 1769040696797666, "job": 42, "event": "compaction_finished", "compaction_time_micros": 130433, "compaction_time_cpu_micros": 39953, "output_level": 6, "num_output_files": 1, "total_output_size": 10527724, "num_input_records": 6619, "num_output_records": 6104, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040696798718, "job": 42, "event": "table_file_deletion", "file_number": 76}
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040696800473, "job": 42, "event": "table_file_deletion", "file_number": 74}
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:36.664344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:36.800585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:36.800593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:36.800595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:36.800596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:11:36 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:36.800598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:11:36 compute-0 podman[277946]: 2026-01-22 00:11:36.963358171 +0000 UTC m=+0.083046245 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 00:11:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:36.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:36 compute-0 nova_compute[247516]: 2026-01-22 00:11:36.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:11:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:37 compute-0 sudo[277967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:11:37 compute-0 sudo[277967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:37 compute-0 sudo[277967]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:37 compute-0 sudo[277993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:11:37 compute-0 sudo[277993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:37 compute-0 sudo[277993]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:37 compute-0 sudo[278018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:11:37 compute-0 sudo[278018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:37 compute-0 sudo[278018]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:11:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:37.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:11:37 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3088693565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:11:37 compute-0 sudo[278043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 00:11:37 compute-0 sudo[278043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:38 compute-0 sudo[278134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:11:38 compute-0 sudo[278134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:38 compute-0 sudo[278134]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:38 compute-0 podman[278162]: 2026-01-22 00:11:38.134216233 +0000 UTC m=+0.070725655 container exec 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 00:11:38 compute-0 sudo[278179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:11:38 compute-0 sudo[278179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:38 compute-0 sudo[278179]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:38 compute-0 podman[278162]: 2026-01-22 00:11:38.248715052 +0000 UTC m=+0.185224474 container exec_died 0441eddad81525c34ff42378211568103638b046da9061988c13e4c3da1771a5 (image=quay.io/ceph/ceph:v18, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 00:11:38 compute-0 ceph-mon[74318]: pgmap v1619: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:11:38 compute-0 podman[278350]: 2026-01-22 00:11:38.898941662 +0000 UTC m=+0.064236083 container exec fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 22 00:11:38 compute-0 podman[278350]: 2026-01-22 00:11:38.912068889 +0000 UTC m=+0.077363300 container exec_died fef4dcdca7b63274344ae3ca3f02ede2fff264497511eaba86fd11152273d2fe (image=quay.io/ceph/haproxy:2.3, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-haproxy-rgw-default-compute-0-xtqnkr)
Jan 22 00:11:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 00:11:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:38.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:11:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 00:11:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:39 compute-0 podman[278414]: 2026-01-22 00:11:39.156438145 +0000 UTC m=+0.069892878 container exec 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, architecture=x86_64, description=keepalived for Ceph, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-type=git, version=2.2.4, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, vendor=Red Hat, Inc., release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container)
Jan 22 00:11:39 compute-0 podman[278414]: 2026-01-22 00:11:39.172030368 +0000 UTC m=+0.085485081 container exec_died 753acd647aed43048c2e914984267ff25fb729cdf5a43c473bf5afd95dba58be (image=quay.io/ceph/keepalived:2.2.4, name=ceph-3759241a-7f1c-520d-ba17-879943ee2f00-keepalived-rgw-default-compute-0-ieqyao, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 22 00:11:39 compute-0 sudo[278043]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:11:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:11:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:11:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:11:39
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'images', 'backups', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log']
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 22 00:11:39 compute-0 sudo[278448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:11:39 compute-0 sudo[278448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:39 compute-0 sudo[278448]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:39 compute-0 sudo[278473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:11:39 compute-0 sudo[278473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:39 compute-0 sudo[278473]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:39 compute-0 sudo[278499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:11:39 compute-0 sudo[278499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:39 compute-0 sudo[278499]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:39 compute-0 sudo[278524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 00:11:39 compute-0 sudo[278524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:11:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:39.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:11:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:11:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:11:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:11:40 compute-0 ceph-mon[74318]: pgmap v1620: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:11:40 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:11:40 compute-0 sudo[278524]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:11:40 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:11:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 00:11:40 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:11:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 00:11:40 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:11:40 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev b53c53cf-e940-43cf-bd7c-68d079e1d974 does not exist
Jan 22 00:11:40 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 00ba7382-9898-4f3a-b392-6556283f06f3 does not exist
Jan 22 00:11:40 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 24f35ec3-9d21-4c83-b21c-2d351fd051aa does not exist
Jan 22 00:11:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 00:11:40 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:11:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 00:11:40 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:11:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:11:40 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:11:40 compute-0 sudo[278580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:11:40 compute-0 sudo[278580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:40 compute-0 sudo[278580]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:40 compute-0 sudo[278605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:11:40 compute-0 sudo[278605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:40 compute-0 sudo[278605]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:40 compute-0 sudo[278630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:11:40 compute-0 sudo[278630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:40 compute-0 sudo[278630]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:40 compute-0 sudo[278655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 00:11:40 compute-0 sudo[278655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:40 compute-0 podman[278719]: 2026-01-22 00:11:40.735507072 +0000 UTC m=+0.050803436 container create 11ee03243c5090baffd6e92b3ecdfcc8da3ff992e330023b9c153a4d5028a50b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 00:11:40 compute-0 systemd[1]: Started libpod-conmon-11ee03243c5090baffd6e92b3ecdfcc8da3ff992e330023b9c153a4d5028a50b.scope.
Jan 22 00:11:40 compute-0 podman[278719]: 2026-01-22 00:11:40.710545987 +0000 UTC m=+0.025842371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:11:40 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:11:40 compute-0 podman[278719]: 2026-01-22 00:11:40.832723946 +0000 UTC m=+0.148020320 container init 11ee03243c5090baffd6e92b3ecdfcc8da3ff992e330023b9c153a4d5028a50b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:11:40 compute-0 podman[278719]: 2026-01-22 00:11:40.839758593 +0000 UTC m=+0.155054987 container start 11ee03243c5090baffd6e92b3ecdfcc8da3ff992e330023b9c153a4d5028a50b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bell, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:11:40 compute-0 podman[278719]: 2026-01-22 00:11:40.844228032 +0000 UTC m=+0.159524396 container attach 11ee03243c5090baffd6e92b3ecdfcc8da3ff992e330023b9c153a4d5028a50b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:11:40 compute-0 tender_bell[278735]: 167 167
Jan 22 00:11:40 compute-0 systemd[1]: libpod-11ee03243c5090baffd6e92b3ecdfcc8da3ff992e330023b9c153a4d5028a50b.scope: Deactivated successfully.
Jan 22 00:11:40 compute-0 podman[278719]: 2026-01-22 00:11:40.849266678 +0000 UTC m=+0.164563042 container died 11ee03243c5090baffd6e92b3ecdfcc8da3ff992e330023b9c153a4d5028a50b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:11:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-128e0ff3d8edafb2348ec12eff1318adcdcf521adfa3ca98a3f7a831af262030-merged.mount: Deactivated successfully.
Jan 22 00:11:40 compute-0 podman[278719]: 2026-01-22 00:11:40.899388782 +0000 UTC m=+0.214685146 container remove 11ee03243c5090baffd6e92b3ecdfcc8da3ff992e330023b9c153a4d5028a50b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 00:11:40 compute-0 systemd[1]: libpod-conmon-11ee03243c5090baffd6e92b3ecdfcc8da3ff992e330023b9c153a4d5028a50b.scope: Deactivated successfully.
Jan 22 00:11:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:11:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:40.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:11:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:11:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:11:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:11:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:11:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:11:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:11:41 compute-0 podman[278759]: 2026-01-22 00:11:41.07670726 +0000 UTC m=+0.048168134 container create 8871a1f88f4b7930fdeac33ebf728b099402edf3de06c9c2a23bb791c67c9d30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 00:11:41 compute-0 systemd[1]: Started libpod-conmon-8871a1f88f4b7930fdeac33ebf728b099402edf3de06c9c2a23bb791c67c9d30.scope.
Jan 22 00:11:41 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:11:41 compute-0 podman[278759]: 2026-01-22 00:11:41.055698148 +0000 UTC m=+0.027159042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15cabfbf8dad5f3208c4fecdd53879ae467db45ef28c042bde753117019a0d6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15cabfbf8dad5f3208c4fecdd53879ae467db45ef28c042bde753117019a0d6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15cabfbf8dad5f3208c4fecdd53879ae467db45ef28c042bde753117019a0d6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15cabfbf8dad5f3208c4fecdd53879ae467db45ef28c042bde753117019a0d6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15cabfbf8dad5f3208c4fecdd53879ae467db45ef28c042bde753117019a0d6d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 00:11:41 compute-0 podman[278759]: 2026-01-22 00:11:41.173046706 +0000 UTC m=+0.144507580 container init 8871a1f88f4b7930fdeac33ebf728b099402edf3de06c9c2a23bb791c67c9d30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 00:11:41 compute-0 podman[278759]: 2026-01-22 00:11:41.182095447 +0000 UTC m=+0.153556321 container start 8871a1f88f4b7930fdeac33ebf728b099402edf3de06c9c2a23bb791c67c9d30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 00:11:41 compute-0 podman[278759]: 2026-01-22 00:11:41.186496773 +0000 UTC m=+0.157957647 container attach 8871a1f88f4b7930fdeac33ebf728b099402edf3de06c9c2a23bb791c67c9d30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 00:11:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:41.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:41 compute-0 nova_compute[247516]: 2026-01-22 00:11:41.728 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:11:41 compute-0 nova_compute[247516]: 2026-01-22 00:11:41.731 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:11:41 compute-0 nova_compute[247516]: 2026-01-22 00:11:41.732 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:11:41 compute-0 nova_compute[247516]: 2026-01-22 00:11:41.732 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:11:41 compute-0 nova_compute[247516]: 2026-01-22 00:11:41.732 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:11:42 compute-0 great_proskuriakova[278775]: --> passed data devices: 0 physical, 1 LVM
Jan 22 00:11:42 compute-0 great_proskuriakova[278775]: --> relative data size: 1.0
Jan 22 00:11:42 compute-0 great_proskuriakova[278775]: --> All data devices are unavailable
Jan 22 00:11:42 compute-0 systemd[1]: libpod-8871a1f88f4b7930fdeac33ebf728b099402edf3de06c9c2a23bb791c67c9d30.scope: Deactivated successfully.
Jan 22 00:11:42 compute-0 podman[278759]: 2026-01-22 00:11:42.123913887 +0000 UTC m=+1.095374791 container died 8871a1f88f4b7930fdeac33ebf728b099402edf3de06c9c2a23bb791c67c9d30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:11:42 compute-0 ceph-mon[74318]: pgmap v1621: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-15cabfbf8dad5f3208c4fecdd53879ae467db45ef28c042bde753117019a0d6d-merged.mount: Deactivated successfully.
Jan 22 00:11:42 compute-0 podman[278759]: 2026-01-22 00:11:42.190530592 +0000 UTC m=+1.161991466 container remove 8871a1f88f4b7930fdeac33ebf728b099402edf3de06c9c2a23bb791c67c9d30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 00:11:42 compute-0 systemd[1]: libpod-conmon-8871a1f88f4b7930fdeac33ebf728b099402edf3de06c9c2a23bb791c67c9d30.scope: Deactivated successfully.
Jan 22 00:11:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:11:42 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4056509388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:11:42 compute-0 sudo[278655]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:42 compute-0 nova_compute[247516]: 2026-01-22 00:11:42.236 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:11:42 compute-0 sudo[278824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:11:42 compute-0 sudo[278824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:42 compute-0 sudo[278824]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:42 compute-0 sudo[278849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:11:42 compute-0 sudo[278849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:42 compute-0 sudo[278849]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:42 compute-0 nova_compute[247516]: 2026-01-22 00:11:42.419 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:11:42 compute-0 nova_compute[247516]: 2026-01-22 00:11:42.420 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5147MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:11:42 compute-0 nova_compute[247516]: 2026-01-22 00:11:42.421 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:11:42 compute-0 nova_compute[247516]: 2026-01-22 00:11:42.421 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:11:42 compute-0 sudo[278874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:11:42 compute-0 sudo[278874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:42 compute-0 sudo[278874]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:42 compute-0 sudo[278899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 22 00:11:42 compute-0 sudo[278899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:42 compute-0 podman[278963]: 2026-01-22 00:11:42.848572904 +0000 UTC m=+0.045943936 container create 19a1073f3ce92db33bf391e2bbf407fec6b61f39658b11096f6dafaca97d06fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hofstadter, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 00:11:42 compute-0 systemd[1]: Started libpod-conmon-19a1073f3ce92db33bf391e2bbf407fec6b61f39658b11096f6dafaca97d06fb.scope.
Jan 22 00:11:42 compute-0 podman[278963]: 2026-01-22 00:11:42.82585305 +0000 UTC m=+0.023224112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:11:42 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:11:42 compute-0 podman[278963]: 2026-01-22 00:11:42.942736523 +0000 UTC m=+0.140107585 container init 19a1073f3ce92db33bf391e2bbf407fec6b61f39658b11096f6dafaca97d06fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:11:42 compute-0 podman[278963]: 2026-01-22 00:11:42.950938517 +0000 UTC m=+0.148309559 container start 19a1073f3ce92db33bf391e2bbf407fec6b61f39658b11096f6dafaca97d06fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hofstadter, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:11:42 compute-0 podman[278963]: 2026-01-22 00:11:42.954507358 +0000 UTC m=+0.151878430 container attach 19a1073f3ce92db33bf391e2bbf407fec6b61f39658b11096f6dafaca97d06fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 00:11:42 compute-0 admiring_hofstadter[278980]: 167 167
Jan 22 00:11:42 compute-0 systemd[1]: libpod-19a1073f3ce92db33bf391e2bbf407fec6b61f39658b11096f6dafaca97d06fb.scope: Deactivated successfully.
Jan 22 00:11:42 compute-0 podman[278963]: 2026-01-22 00:11:42.957624405 +0000 UTC m=+0.154995437 container died 19a1073f3ce92db33bf391e2bbf407fec6b61f39658b11096f6dafaca97d06fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hofstadter, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 00:11:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-19b416038d606daf250ac860601139ae71a46cdee047fd0590dfdafa976bc973-merged.mount: Deactivated successfully.
Jan 22 00:11:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:42.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:43 compute-0 podman[278963]: 2026-01-22 00:11:43.005473249 +0000 UTC m=+0.202844291 container remove 19a1073f3ce92db33bf391e2bbf407fec6b61f39658b11096f6dafaca97d06fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Jan 22 00:11:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:43 compute-0 systemd[1]: libpod-conmon-19a1073f3ce92db33bf391e2bbf407fec6b61f39658b11096f6dafaca97d06fb.scope: Deactivated successfully.
Jan 22 00:11:43 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/4056509388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:11:43 compute-0 podman[279005]: 2026-01-22 00:11:43.188346048 +0000 UTC m=+0.046614766 container create 91adc07778d67da65d65634663469b9346125f7137c7b220415f942fdcfc5c2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 00:11:43 compute-0 systemd[1]: Started libpod-conmon-91adc07778d67da65d65634663469b9346125f7137c7b220415f942fdcfc5c2d.scope.
Jan 22 00:11:43 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4faa479f4a67b3b59afd403a2a3bb6cbfb5e86eb8a1ddf12fb27b740ef7fec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4faa479f4a67b3b59afd403a2a3bb6cbfb5e86eb8a1ddf12fb27b740ef7fec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4faa479f4a67b3b59afd403a2a3bb6cbfb5e86eb8a1ddf12fb27b740ef7fec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4faa479f4a67b3b59afd403a2a3bb6cbfb5e86eb8a1ddf12fb27b740ef7fec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:11:43 compute-0 podman[279005]: 2026-01-22 00:11:43.166299464 +0000 UTC m=+0.024568202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:11:43 compute-0 podman[279005]: 2026-01-22 00:11:43.272969342 +0000 UTC m=+0.131238080 container init 91adc07778d67da65d65634663469b9346125f7137c7b220415f942fdcfc5c2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 00:11:43 compute-0 podman[279005]: 2026-01-22 00:11:43.280589957 +0000 UTC m=+0.138858675 container start 91adc07778d67da65d65634663469b9346125f7137c7b220415f942fdcfc5c2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_napier, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 00:11:43 compute-0 podman[279005]: 2026-01-22 00:11:43.284349294 +0000 UTC m=+0.142618072 container attach 91adc07778d67da65d65634663469b9346125f7137c7b220415f942fdcfc5c2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 00:11:43 compute-0 nova_compute[247516]: 2026-01-22 00:11:43.295 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 22 00:11:43 compute-0 nova_compute[247516]: 2026-01-22 00:11:43.297 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:11:43 compute-0 nova_compute[247516]: 2026-01-22 00:11:43.297 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:11:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:43.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:43.839783) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040703839879, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 377, "num_deletes": 257, "total_data_size": 238481, "memory_usage": 245752, "flush_reason": "Manual Compaction"}
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040703844881, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 237014, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35921, "largest_seqno": 36297, "table_properties": {"data_size": 234635, "index_size": 479, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5854, "raw_average_key_size": 18, "raw_value_size": 229820, "raw_average_value_size": 724, "num_data_blocks": 19, "num_entries": 317, "num_filter_entries": 317, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769040697, "oldest_key_time": 1769040697, "file_creation_time": 1769040703, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 5246 microseconds, and 1466 cpu microseconds.
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:43.845004) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 237014 bytes OK
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:43.845079) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:43.847010) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:43.847038) EVENT_LOG_v1 {"time_micros": 1769040703847029, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:43.847060) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 235981, prev total WAL file size 235981, number of live WAL files 2.
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:43.847843) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303032' seq:72057594037927935, type:22 .. '6C6F676D0031323535' seq:0, type:0; will stop at (end)
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(231KB)], [77(10MB)]
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040703848032, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 10764738, "oldest_snapshot_seqno": -1}
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 5893 keys, 10637309 bytes, temperature: kUnknown
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040703954622, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 10637309, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10596472, "index_size": 24983, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 151923, "raw_average_key_size": 25, "raw_value_size": 10488640, "raw_average_value_size": 1779, "num_data_blocks": 1006, "num_entries": 5893, "num_filter_entries": 5893, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769040703, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:11:43 compute-0 nova_compute[247516]: 2026-01-22 00:11:43.961 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Refreshing inventories for resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:43.955218) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 10637309 bytes
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:43.983423) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 100.9 rd, 99.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.0 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(90.3) write-amplify(44.9) OK, records in: 6421, records dropped: 528 output_compression: NoCompression
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:43.983467) EVENT_LOG_v1 {"time_micros": 1769040703983451, "job": 44, "event": "compaction_finished", "compaction_time_micros": 106732, "compaction_time_cpu_micros": 25873, "output_level": 6, "num_output_files": 1, "total_output_size": 10637309, "num_input_records": 6421, "num_output_records": 5893, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040703983769, "job": 44, "event": "table_file_deletion", "file_number": 79}
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040703985850, "job": 44, "event": "table_file_deletion", "file_number": 77}
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:43.847639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:43.985916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:43.985922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:43.985924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:43.985926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:11:43 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:11:43.985927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:11:44 compute-0 goofy_napier[279022]: {
Jan 22 00:11:44 compute-0 goofy_napier[279022]:     "1": [
Jan 22 00:11:44 compute-0 goofy_napier[279022]:         {
Jan 22 00:11:44 compute-0 goofy_napier[279022]:             "devices": [
Jan 22 00:11:44 compute-0 goofy_napier[279022]:                 "/dev/loop3"
Jan 22 00:11:44 compute-0 goofy_napier[279022]:             ],
Jan 22 00:11:44 compute-0 goofy_napier[279022]:             "lv_name": "ceph_lv0",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:             "lv_size": "7511998464",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:             "name": "ceph_lv0",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:             "tags": {
Jan 22 00:11:44 compute-0 goofy_napier[279022]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:                 "ceph.cluster_name": "ceph",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:                 "ceph.crush_device_class": "",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:                 "ceph.encrypted": "0",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:                 "ceph.osd_id": "1",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:                 "ceph.type": "block",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:                 "ceph.vdo": "0"
Jan 22 00:11:44 compute-0 goofy_napier[279022]:             },
Jan 22 00:11:44 compute-0 goofy_napier[279022]:             "type": "block",
Jan 22 00:11:44 compute-0 goofy_napier[279022]:             "vg_name": "ceph_vg0"
Jan 22 00:11:44 compute-0 goofy_napier[279022]:         }
Jan 22 00:11:44 compute-0 goofy_napier[279022]:     ]
Jan 22 00:11:44 compute-0 goofy_napier[279022]: }
Jan 22 00:11:44 compute-0 systemd[1]: libpod-91adc07778d67da65d65634663469b9346125f7137c7b220415f942fdcfc5c2d.scope: Deactivated successfully.
Jan 22 00:11:44 compute-0 podman[279005]: 2026-01-22 00:11:44.152429058 +0000 UTC m=+1.010697806 container died 91adc07778d67da65d65634663469b9346125f7137c7b220415f942fdcfc5c2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 00:11:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f4faa479f4a67b3b59afd403a2a3bb6cbfb5e86eb8a1ddf12fb27b740ef7fec-merged.mount: Deactivated successfully.
Jan 22 00:11:44 compute-0 nova_compute[247516]: 2026-01-22 00:11:44.215 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Updating ProviderTree inventory for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 22 00:11:44 compute-0 nova_compute[247516]: 2026-01-22 00:11:44.215 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Updating inventory in ProviderTree for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 00:11:44 compute-0 podman[279005]: 2026-01-22 00:11:44.232169831 +0000 UTC m=+1.090438549 container remove 91adc07778d67da65d65634663469b9346125f7137c7b220415f942fdcfc5c2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_napier, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 00:11:44 compute-0 systemd[1]: libpod-conmon-91adc07778d67da65d65634663469b9346125f7137c7b220415f942fdcfc5c2d.scope: Deactivated successfully.
Jan 22 00:11:44 compute-0 nova_compute[247516]: 2026-01-22 00:11:44.253 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Refreshing aggregate associations for resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 22 00:11:44 compute-0 sudo[278899]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:44 compute-0 nova_compute[247516]: 2026-01-22 00:11:44.285 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Refreshing trait associations for resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 22 00:11:44 compute-0 nova_compute[247516]: 2026-01-22 00:11:44.319 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:11:44 compute-0 sudo[279046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:11:44 compute-0 sudo[279046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:44 compute-0 sudo[279046]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:44 compute-0 sudo[279072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:11:44 compute-0 sudo[279072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:44 compute-0 sudo[279072]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:44 compute-0 sudo[279097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:11:44 compute-0 sudo[279097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:44 compute-0 sudo[279097]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:44 compute-0 sudo[279141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 22 00:11:44 compute-0 sudo[279141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:44 compute-0 nova_compute[247516]: 2026-01-22 00:11:44.809 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:11:44 compute-0 nova_compute[247516]: 2026-01-22 00:11:44.820 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:11:44 compute-0 ceph-mon[74318]: pgmap v1622: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:44 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/4143687167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:11:44 compute-0 podman[279208]: 2026-01-22 00:11:44.917897971 +0000 UTC m=+0.046785463 container create 7581228110e58b14f148929bda14991f41ffc6d33cbc3cc571763bf760acfdce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cartwright, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 00:11:44 compute-0 systemd[1]: Started libpod-conmon-7581228110e58b14f148929bda14991f41ffc6d33cbc3cc571763bf760acfdce.scope.
Jan 22 00:11:44 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:11:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:11:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:44.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:11:44 compute-0 podman[279208]: 2026-01-22 00:11:44.899618233 +0000 UTC m=+0.028505745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:11:45 compute-0 podman[279208]: 2026-01-22 00:11:45.004680271 +0000 UTC m=+0.133567793 container init 7581228110e58b14f148929bda14991f41ffc6d33cbc3cc571763bf760acfdce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cartwright, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:11:45 compute-0 podman[279208]: 2026-01-22 00:11:45.014377241 +0000 UTC m=+0.143264753 container start 7581228110e58b14f148929bda14991f41ffc6d33cbc3cc571763bf760acfdce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cartwright, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:11:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:45 compute-0 podman[279208]: 2026-01-22 00:11:45.018715076 +0000 UTC m=+0.147602698 container attach 7581228110e58b14f148929bda14991f41ffc6d33cbc3cc571763bf760acfdce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cartwright, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 00:11:45 compute-0 inspiring_cartwright[279225]: 167 167
Jan 22 00:11:45 compute-0 systemd[1]: libpod-7581228110e58b14f148929bda14991f41ffc6d33cbc3cc571763bf760acfdce.scope: Deactivated successfully.
Jan 22 00:11:45 compute-0 podman[279208]: 2026-01-22 00:11:45.0239932 +0000 UTC m=+0.152880692 container died 7581228110e58b14f148929bda14991f41ffc6d33cbc3cc571763bf760acfdce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cartwright, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 00:11:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-097204383cc5999c49013fe4543d4cadf5167477c44194015470f6b6c9200d27-merged.mount: Deactivated successfully.
Jan 22 00:11:45 compute-0 podman[279208]: 2026-01-22 00:11:45.071313177 +0000 UTC m=+0.200200669 container remove 7581228110e58b14f148929bda14991f41ffc6d33cbc3cc571763bf760acfdce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cartwright, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:11:45 compute-0 systemd[1]: libpod-conmon-7581228110e58b14f148929bda14991f41ffc6d33cbc3cc571763bf760acfdce.scope: Deactivated successfully.
Jan 22 00:11:45 compute-0 podman[279248]: 2026-01-22 00:11:45.267309843 +0000 UTC m=+0.050198897 container create 73d703cdc512758cf4c0f0dcd97698685432a67bb7bf3b6a345cc0f1a657aa87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_gates, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:11:45 compute-0 systemd[1]: Started libpod-conmon-73d703cdc512758cf4c0f0dcd97698685432a67bb7bf3b6a345cc0f1a657aa87.scope.
Jan 22 00:11:45 compute-0 podman[279248]: 2026-01-22 00:11:45.246328212 +0000 UTC m=+0.029217286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:11:45 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:11:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25c519dd4fe1fda0204cecf75c717de25e8e72b28e49decf0e7c1e8ae7b9ae9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:11:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25c519dd4fe1fda0204cecf75c717de25e8e72b28e49decf0e7c1e8ae7b9ae9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:11:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25c519dd4fe1fda0204cecf75c717de25e8e72b28e49decf0e7c1e8ae7b9ae9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:11:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25c519dd4fe1fda0204cecf75c717de25e8e72b28e49decf0e7c1e8ae7b9ae9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:11:45 compute-0 podman[279248]: 2026-01-22 00:11:45.372942028 +0000 UTC m=+0.155831102 container init 73d703cdc512758cf4c0f0dcd97698685432a67bb7bf3b6a345cc0f1a657aa87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:11:45 compute-0 podman[279248]: 2026-01-22 00:11:45.380402509 +0000 UTC m=+0.163291573 container start 73d703cdc512758cf4c0f0dcd97698685432a67bb7bf3b6a345cc0f1a657aa87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_gates, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 00:11:45 compute-0 podman[279248]: 2026-01-22 00:11:45.385368834 +0000 UTC m=+0.168257908 container attach 73d703cdc512758cf4c0f0dcd97698685432a67bb7bf3b6a345cc0f1a657aa87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_gates, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 00:11:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:11:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:45.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:11:46 compute-0 dreamy_gates[279264]: {
Jan 22 00:11:46 compute-0 dreamy_gates[279264]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 22 00:11:46 compute-0 dreamy_gates[279264]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:11:46 compute-0 dreamy_gates[279264]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 00:11:46 compute-0 dreamy_gates[279264]:         "osd_id": 1,
Jan 22 00:11:46 compute-0 dreamy_gates[279264]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:11:46 compute-0 dreamy_gates[279264]:         "type": "bluestore"
Jan 22 00:11:46 compute-0 dreamy_gates[279264]:     }
Jan 22 00:11:46 compute-0 dreamy_gates[279264]: }
Jan 22 00:11:46 compute-0 systemd[1]: libpod-73d703cdc512758cf4c0f0dcd97698685432a67bb7bf3b6a345cc0f1a657aa87.scope: Deactivated successfully.
Jan 22 00:11:46 compute-0 podman[279248]: 2026-01-22 00:11:46.434574543 +0000 UTC m=+1.217463597 container died 73d703cdc512758cf4c0f0dcd97698685432a67bb7bf3b6a345cc0f1a657aa87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_gates, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 00:11:46 compute-0 systemd[1]: libpod-73d703cdc512758cf4c0f0dcd97698685432a67bb7bf3b6a345cc0f1a657aa87.scope: Consumed 1.052s CPU time.
Jan 22 00:11:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d25c519dd4fe1fda0204cecf75c717de25e8e72b28e49decf0e7c1e8ae7b9ae9-merged.mount: Deactivated successfully.
Jan 22 00:11:46 compute-0 podman[279248]: 2026-01-22 00:11:46.49899173 +0000 UTC m=+1.281880784 container remove 73d703cdc512758cf4c0f0dcd97698685432a67bb7bf3b6a345cc0f1a657aa87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_gates, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:11:46 compute-0 systemd[1]: libpod-conmon-73d703cdc512758cf4c0f0dcd97698685432a67bb7bf3b6a345cc0f1a657aa87.scope: Deactivated successfully.
Jan 22 00:11:46 compute-0 sudo[279141]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:11:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:11:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:11:46 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:11:46 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 13010be6-b122-413e-98ce-ac7cf51ff298 does not exist
Jan 22 00:11:46 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 5baa0344-5e01-4236-a76a-1a4db463290a does not exist
Jan 22 00:11:46 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 8ecaf69c-0e8a-4273-a7a3-6f1165026c90 does not exist
Jan 22 00:11:46 compute-0 sudo[279300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:11:46 compute-0 sudo[279300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:46 compute-0 sudo[279300]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:46 compute-0 sudo[279325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 00:11:46 compute-0 sudo[279325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:46 compute-0 sudo[279325]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:46 compute-0 ceph-mon[74318]: pgmap v1623: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:11:46 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:11:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:46.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:11:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:47.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:11:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:11:48.772 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:11:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:11:48.773 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:11:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:11:48.773 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:11:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:11:48 compute-0 ceph-mon[74318]: pgmap v1624: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:11:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:48.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:11:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:49.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:50 compute-0 ceph-mon[74318]: pgmap v1625: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:51.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:11:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:51.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:11:52 compute-0 ceph-mon[74318]: pgmap v1626: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:53.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:53.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:11:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 00:11:55 compute-0 ceph-mon[74318]: pgmap v1627: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.003000094s ======
Jan 22 00:11:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:55.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000094s
Jan 22 00:11:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:11:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:55.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:11:56 compute-0 ceph-mon[74318]: pgmap v1628: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:57 compute-0 nova_compute[247516]: 2026-01-22 00:11:57.010 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 00:11:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:11:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:57.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:11:57 compute-0 nova_compute[247516]: 2026-01-22 00:11:57.014 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:11:57 compute-0 nova_compute[247516]: 2026-01-22 00:11:57.015 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 14.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:11:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:57.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:11:58 compute-0 nova_compute[247516]: 2026-01-22 00:11:58.016 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:11:58 compute-0 sudo[279356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:11:58 compute-0 sudo[279356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:58 compute-0 sudo[279356]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:58 compute-0 sudo[279387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:11:58 compute-0 sudo[279387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:11:58 compute-0 sudo[279387]: pam_unix(sudo:session): session closed for user root
Jan 22 00:11:58 compute-0 ceph-mon[74318]: pgmap v1629: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:58 compute-0 podman[279380]: 2026-01-22 00:11:58.46433837 +0000 UTC m=+0.167718484 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 00:11:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:11:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:11:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:11:59.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:11:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:11:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:11:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:11:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:11:59.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:00 compute-0 ceph-mon[74318]: pgmap v1630: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:00 compute-0 nova_compute[247516]: 2026-01-22 00:12:00.805 247523 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 0.08 sec
Jan 22 00:12:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:12:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:01.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:12:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:01.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:02 compute-0 ceph-mon[74318]: pgmap v1631: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:02 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:12:02.778 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 00:12:02 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:12:02.780 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 00:12:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:12:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:03.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:12:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:03.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:12:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:12:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:05.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:12:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:05 compute-0 ceph-mon[74318]: pgmap v1632: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:05.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:06 compute-0 ceph-mon[74318]: pgmap v1633: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:12:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:07.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:12:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:07.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:07 compute-0 podman[279438]: 2026-01-22 00:12:07.968892221 +0000 UTC m=+0.082741836 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 00:12:08 compute-0 ceph-mon[74318]: pgmap v1634: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:12:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:09.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:12:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:12:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:12:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:12:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:12:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:12:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:12:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:09.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:12:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:11.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:11 compute-0 ceph-mon[74318]: pgmap v1635: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:12:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:11.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:12:12 compute-0 ceph-mon[74318]: pgmap v1636: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:12 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:12:12.783 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 00:12:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:13.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:13.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:12:14 compute-0 ceph-mon[74318]: pgmap v1637: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:12:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:15.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:12:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:15.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:16 compute-0 ceph-mon[74318]: pgmap v1638: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:17.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:12:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:17.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:12:18 compute-0 sudo[279463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:12:18 compute-0 sudo[279463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:18 compute-0 sudo[279463]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:18 compute-0 sudo[279488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:12:18 compute-0 sudo[279488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:18 compute-0 sudo[279488]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:12:18 compute-0 ceph-mon[74318]: pgmap v1639: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:19.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:19.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:20 compute-0 ceph-mon[74318]: pgmap v1640: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:21.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:21.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:22 compute-0 nova_compute[247516]: 2026-01-22 00:12:22.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:12:22 compute-0 nova_compute[247516]: 2026-01-22 00:12:22.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:12:22 compute-0 nova_compute[247516]: 2026-01-22 00:12:22.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:12:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:23.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:23 compute-0 ceph-mon[74318]: pgmap v1641: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:23 compute-0 nova_compute[247516]: 2026-01-22 00:12:23.299 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 00:12:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:23.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:12:24 compute-0 ceph-mon[74318]: pgmap v1642: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:24 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3515954200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:12:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:25.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:25.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:26 compute-0 ceph-mon[74318]: pgmap v1643: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2508880283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:12:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/707740085' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:12:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/707740085' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
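The `df` and `osd pool get-quota` dispatch entries above are monitor commands issued by OpenStack services polling storage capacity under the client.openstack identity. A sketch of issuing the same commands through the python-rados binding, assuming /etc/ceph/ceph.conf and a readable keyring for client.openstack:

```python
import json
import rados

# Assumes /etc/ceph/ceph.conf plus a keyring for the client.openstack
# entity shown in the audit lines above.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
cluster.connect()
for cmd in ({"prefix": "df", "format": "json"},
            {"prefix": "osd pool get-quota", "pool": "volumes",
             "format": "json"}):
    ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
    print(cmd["prefix"], ret, json.loads(out or b'{}'))
cluster.shutdown()
```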
Jan 22 00:12:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:12:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:27.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:12:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:27.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:28 compute-0 ceph-mon[74318]: pgmap v1644: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:12:28 compute-0 podman[279518]: 2026-01-22 00:12:28.972762614 +0000 UTC m=+0.085714068 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
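The podman entry above is a periodic healthcheck result for ovn_controller; the check itself is the 'test': '/openstack/healthcheck' entry visible in config_data. One way to read the same health state on demand, assuming the container exists on this host:

```python
import json
import subprocess

# Reads the state the healthcheck timer logs (Status, FailingStreak).
out = subprocess.run(
    ['podman', 'inspect', '--format', '{{json .State.Health}}',
     'ovn_controller'],
    capture_output=True, text=True, check=True).stdout
health = json.loads(out)
print(health['Status'], 'failing streak:', health['FailingStreak'])
```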
Jan 22 00:12:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:12:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:29.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:12:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:29.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:29 compute-0 nova_compute[247516]: 2026-01-22 00:12:29.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:12:29 compute-0 nova_compute[247516]: 2026-01-22 00:12:29.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:12:29 compute-0 nova_compute[247516]: 2026-01-22 00:12:29.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:12:30 compute-0 ceph-mon[74318]: pgmap v1645: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:30 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1079733136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:12:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:12:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:31.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:12:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:12:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:31.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:12:32 compute-0 ceph-mon[74318]: pgmap v1646: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:32 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/180848080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:12:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:33.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:33.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:12:33 compute-0 nova_compute[247516]: 2026-01-22 00:12:33.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:12:34 compute-0 ceph-mon[74318]: pgmap v1647: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:34 compute-0 nova_compute[247516]: 2026-01-22 00:12:34.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:12:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:35.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:35.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:35 compute-0 nova_compute[247516]: 2026-01-22 00:12:35.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:12:35 compute-0 nova_compute[247516]: 2026-01-22 00:12:35.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:12:36 compute-0 ceph-mon[74318]: pgmap v1648: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:36 compute-0 nova_compute[247516]: 2026-01-22 00:12:36.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:12:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:12:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:37.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:12:37 compute-0 nova_compute[247516]: 2026-01-22 00:12:37.354 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:12:37 compute-0 nova_compute[247516]: 2026-01-22 00:12:37.355 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:12:37 compute-0 nova_compute[247516]: 2026-01-22 00:12:37.355 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
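The Acquiring/acquired/released trio above is the standard oslo.concurrency lock trace. A stand-alone equivalent that serializes callers on the same named in-process lock (and emits the same debug trace when oslo logging is configured):

```python
from oslo_concurrency import lockutils

# Serializes callers on an in-process lock named "compute_resources".
@lockutils.synchronized('compute_resources')
def clean_compute_node_cache():
    return True  # Nova prunes stale compute-node cache entries here

clean_compute_node_cache()
```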
Jan 22 00:12:37 compute-0 nova_compute[247516]: 2026-01-22 00:12:37.355 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:12:37 compute-0 nova_compute[247516]: 2026-01-22 00:12:37.356 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:12:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:37.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:12:37 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4189443919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:12:37 compute-0 nova_compute[247516]: 2026-01-22 00:12:37.786 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
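Here the resource tracker shells out to the ceph CLI through oslo.concurrency rather than using a librados binding. The equivalent call, assuming the ceph CLI and the client.openstack credentials are present on the host:

```python
import json
from oslo_concurrency import processutils

# Same invocation the tracker logs above.
out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
print(json.loads(out)['stats']['total_avail_bytes'])
```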
Jan 22 00:12:37 compute-0 nova_compute[247516]: 2026-01-22 00:12:37.962 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:12:37 compute-0 nova_compute[247516]: 2026-01-22 00:12:37.963 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5150MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:12:37 compute-0 nova_compute[247516]: 2026-01-22 00:12:37.963 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:12:37 compute-0 nova_compute[247516]: 2026-01-22 00:12:37.964 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:12:38 compute-0 ceph-mon[74318]: pgmap v1649: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:38 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/4189443919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:12:38 compute-0 nova_compute[247516]: 2026-01-22 00:12:38.256 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 22 00:12:38 compute-0 nova_compute[247516]: 2026-01-22 00:12:38.257 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:12:38 compute-0 nova_compute[247516]: 2026-01-22 00:12:38.257 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:12:38 compute-0 nova_compute[247516]: 2026-01-22 00:12:38.307 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:12:38 compute-0 sudo[279592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:12:38 compute-0 sudo[279592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:38 compute-0 sudo[279592]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:38 compute-0 sudo[279623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:12:38 compute-0 sudo[279623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:38 compute-0 sudo[279623]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:38 compute-0 podman[279616]: 2026-01-22 00:12:38.687188346 +0000 UTC m=+0.068263767 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 00:12:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:12:38 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3904167747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:12:38 compute-0 nova_compute[247516]: 2026-01-22 00:12:38.796 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:12:38 compute-0 nova_compute[247516]: 2026-01-22 00:12:38.802 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:12:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:12:38 compute-0 nova_compute[247516]: 2026-01-22 00:12:38.892 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
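The inventory record above determines schedulable capacity in placement as (total - reserved) * allocation_ratio per resource class; worked out for these numbers:

```python
# Placement capacity per resource class: (total - reserved) * ratio.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 0,   'allocation_ratio': 0.9},
}
for rc, v in inventory.items():
    print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
# VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 18.0
```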
Jan 22 00:12:38 compute-0 nova_compute[247516]: 2026-01-22 00:12:38.893 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:12:38 compute-0 nova_compute[247516]: 2026-01-22 00:12:38.894 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:12:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:39.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:12:39
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'backups', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'images', 'vms']
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 22 00:12:39 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3904167747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:12:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:12:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:39.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:39 compute-0 nova_compute[247516]: 2026-01-22 00:12:39.896 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:12:40 compute-0 ceph-mon[74318]: pgmap v1650: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:40 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:12:40.820 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 00:12:40 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:12:40.822 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
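The agent coalesces bursts of SB_Global updates: it waits six seconds before writing the new nb_cfg back to its Chassis_Private record (the write appears in the DbSetCommand transaction further below, at 00:12:46). A hypothetical, self-contained illustration of that delay-and-coalesce pattern — not Neutron's actual implementation:

```python
import threading

class ChassisUpdater:
    """Hypothetical: fold rapid nb_cfg bumps into one delayed write."""

    def __init__(self, delay=6.0):
        self.delay = delay
        self._timer = None
        self._lock = threading.Lock()

    def on_sb_global_update(self, nb_cfg):
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()          # supersede the pending write
            self._timer = threading.Timer(
                self.delay, self._write_chassis, args=(nb_cfg,))
            self._timer.start()

    def _write_chassis(self, nb_cfg):
        print(f"set external_ids neutron:ovn-metadata-sb-cfg={nb_cfg}")

u = ChassisUpdater()
u.on_sb_global_update(27)
u.on_sb_global_update(28)  # only 28 is written, ~6 seconds later
```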
Jan 22 00:12:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:41.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:12:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:41.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:12:42 compute-0 ceph-mon[74318]: pgmap v1651: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:43.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:43.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:12:44 compute-0 ceph-mon[74318]: pgmap v1652: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:12:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:45.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:12:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:12:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:45.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:12:46 compute-0 ceph-mon[74318]: pgmap v1653: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:46 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:12:46.826 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 00:12:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:47.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:47 compute-0 sudo[279667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:12:47 compute-0 sudo[279667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:47 compute-0 sudo[279667]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:47 compute-0 sudo[279692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:12:47 compute-0 sudo[279692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:47 compute-0 sudo[279692]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:47 compute-0 sudo[279717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:12:47 compute-0 sudo[279717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:47 compute-0 sudo[279717]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:47 compute-0 sudo[279742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 00:12:47 compute-0 sudo[279742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:12:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:47.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:12:47 compute-0 sudo[279742]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:12:47 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:12:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 00:12:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:12:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 00:12:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:12:47 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 6cdf064d-f4d1-4460-ae21-7a70c112a5b6 does not exist
Jan 22 00:12:47 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev a10170b4-84f0-4309-8019-c750a976eb16 does not exist
Jan 22 00:12:47 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 5512fa03-5099-427e-a2d2-cf44b40be4df does not exist
Jan 22 00:12:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 00:12:47 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:12:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 00:12:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:12:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:12:47 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:12:48 compute-0 sudo[279799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:12:48 compute-0 sudo[279799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:48 compute-0 sudo[279799]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:48 compute-0 sudo[279824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:12:48 compute-0 sudo[279824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:48 compute-0 sudo[279824]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:48 compute-0 sudo[279849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:12:48 compute-0 sudo[279849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:48 compute-0 sudo[279849]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:48 compute-0 sudo[279874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 00:12:48 compute-0 sudo[279874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:48 compute-0 podman[279941]: 2026-01-22 00:12:48.560949847 +0000 UTC m=+0.042165507 container create 559eb858e350c3301d4defb4a583921fa77fbd0b57bc6085eff5cd63d90e3283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:12:48 compute-0 systemd[1]: Started libpod-conmon-559eb858e350c3301d4defb4a583921fa77fbd0b57bc6085eff5cd63d90e3283.scope.
Jan 22 00:12:48 compute-0 podman[279941]: 2026-01-22 00:12:48.544451466 +0000 UTC m=+0.025667156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:12:48 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:12:48 compute-0 podman[279941]: 2026-01-22 00:12:48.664956212 +0000 UTC m=+0.146171892 container init 559eb858e350c3301d4defb4a583921fa77fbd0b57bc6085eff5cd63d90e3283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:12:48 compute-0 podman[279941]: 2026-01-22 00:12:48.678802971 +0000 UTC m=+0.160018631 container start 559eb858e350c3301d4defb4a583921fa77fbd0b57bc6085eff5cd63d90e3283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mestorf, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 00:12:48 compute-0 podman[279941]: 2026-01-22 00:12:48.682989591 +0000 UTC m=+0.164205281 container attach 559eb858e350c3301d4defb4a583921fa77fbd0b57bc6085eff5cd63d90e3283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mestorf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:12:48 compute-0 stoic_mestorf[279957]: 167 167
Jan 22 00:12:48 compute-0 systemd[1]: libpod-559eb858e350c3301d4defb4a583921fa77fbd0b57bc6085eff5cd63d90e3283.scope: Deactivated successfully.
Jan 22 00:12:48 compute-0 podman[279941]: 2026-01-22 00:12:48.69004168 +0000 UTC m=+0.171257360 container died 559eb858e350c3301d4defb4a583921fa77fbd0b57bc6085eff5cd63d90e3283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 00:12:48 compute-0 ceph-mon[74318]: pgmap v1654: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:48 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:12:48 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:12:48 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:12:48 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:12:48 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:12:48 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:12:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-514d01648a3de51bd23bb0eb1b6cf458db96e582af9e89dc0d1b05e191bd34d1-merged.mount: Deactivated successfully.
Jan 22 00:12:48 compute-0 podman[279941]: 2026-01-22 00:12:48.739367499 +0000 UTC m=+0.220583179 container remove 559eb858e350c3301d4defb4a583921fa77fbd0b57bc6085eff5cd63d90e3283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:12:48 compute-0 systemd[1]: libpod-conmon-559eb858e350c3301d4defb4a583921fa77fbd0b57bc6085eff5cd63d90e3283.scope: Deactivated successfully.
Jan 22 00:12:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:12:48.774 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:12:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:12:48.775 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:12:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:12:48.776 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:12:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:12:48 compute-0 podman[279979]: 2026-01-22 00:12:48.928901135 +0000 UTC m=+0.045235723 container create 7979282f8940595d641682784d8b8819f675d2f4132d15247f9d637a14ba411f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_darwin, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:12:48 compute-0 systemd[1]: Started libpod-conmon-7979282f8940595d641682784d8b8819f675d2f4132d15247f9d637a14ba411f.scope.
Jan 22 00:12:49 compute-0 podman[279979]: 2026-01-22 00:12:48.911292679 +0000 UTC m=+0.027627287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:12:49 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674a921262ed22e5583eeeb5ad8c6c33adb1641c9589d7ba1418e71bdb21aa05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674a921262ed22e5583eeeb5ad8c6c33adb1641c9589d7ba1418e71bdb21aa05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674a921262ed22e5583eeeb5ad8c6c33adb1641c9589d7ba1418e71bdb21aa05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674a921262ed22e5583eeeb5ad8c6c33adb1641c9589d7ba1418e71bdb21aa05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674a921262ed22e5583eeeb5ad8c6c33adb1641c9589d7ba1418e71bdb21aa05/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
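The kernel notes above mean this XFS filesystem still uses 32-bit inode timestamps, so it can only represent times up to 0x7fffffff seconds after the epoch:

```python
from datetime import datetime, timezone

# 0x7fffffff is the 32-bit signed time_t ceiling the kernel warns about.
print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```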
Jan 22 00:12:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:49.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:49 compute-0 podman[279979]: 2026-01-22 00:12:49.195996507 +0000 UTC m=+0.312331195 container init 7979282f8940595d641682784d8b8819f675d2f4132d15247f9d637a14ba411f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_darwin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:12:49 compute-0 podman[279979]: 2026-01-22 00:12:49.212020663 +0000 UTC m=+0.328355251 container start 7979282f8940595d641682784d8b8819f675d2f4132d15247f9d637a14ba411f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_darwin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 00:12:49 compute-0 podman[279979]: 2026-01-22 00:12:49.407597007 +0000 UTC m=+0.523931645 container attach 7979282f8940595d641682784d8b8819f675d2f4132d15247f9d637a14ba411f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_darwin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:12:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:49.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
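[editor's note] The anonymous "HEAD / HTTP/1.0" requests arriving every two seconds from 192.168.122.100 and 192.168.122.102 are external health probes against radosgw, and the beast frontend records each one in a fixed single-line format. A minimal sketch of pulling the client, request, status, and latency out of such lines (Python; the regex is inferred from the beast lines in this log, not taken from any radosgw specification):

    import re

    # Field order inferred from the beast lines above:
    # address - user [timestamp] "request" status bytes - - - latency=...s
    BEAST = re.compile(
        r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous '
            '[22/Jan/2026:00:12:49.093 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST.search(line)
    assert m is not None and m['status'] == '200'
    print(m['addr'], m['req'], m['latency'])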
Jan 22 00:12:50 compute-0 kind_darwin[279995]: --> passed data devices: 0 physical, 1 LVM
Jan 22 00:12:50 compute-0 kind_darwin[279995]: --> relative data size: 1.0
Jan 22 00:12:50 compute-0 kind_darwin[279995]: --> All data devices are unavailable
Jan 22 00:12:50 compute-0 systemd[1]: libpod-7979282f8940595d641682784d8b8819f675d2f4132d15247f9d637a14ba411f.scope: Deactivated successfully.
Jan 22 00:12:50 compute-0 podman[279979]: 2026-01-22 00:12:50.093415009 +0000 UTC m=+1.209749627 container died 7979282f8940595d641682784d8b8819f675d2f4132d15247f9d637a14ba411f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 00:12:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-674a921262ed22e5583eeeb5ad8c6c33adb1641c9589d7ba1418e71bdb21aa05-merged.mount: Deactivated successfully.
Jan 22 00:12:50 compute-0 podman[279979]: 2026-01-22 00:12:50.164229635 +0000 UTC m=+1.280564243 container remove 7979282f8940595d641682784d8b8819f675d2f4132d15247f9d637a14ba411f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:12:50 compute-0 systemd[1]: libpod-conmon-7979282f8940595d641682784d8b8819f675d2f4132d15247f9d637a14ba411f.scope: Deactivated successfully.
Jan 22 00:12:50 compute-0 sudo[279874]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:50 compute-0 sudo[280025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:12:50 compute-0 sudo[280025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:50 compute-0 sudo[280025]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:50 compute-0 sudo[280050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:12:50 compute-0 sudo[280050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:50 compute-0 sudo[280050]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:50 compute-0 sudo[280075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:12:50 compute-0 sudo[280075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:50 compute-0 sudo[280075]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:50 compute-0 sudo[280100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 22 00:12:50 compute-0 sudo[280100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:50 compute-0 ceph-mon[74318]: pgmap v1655: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:50 compute-0 podman[280166]: 2026-01-22 00:12:50.919123169 +0000 UTC m=+0.061085585 container create 168ac6936d7bb267792571f161012b7d8f608907baaf792ae3a6fa33f3d36c93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_banach, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:12:50 compute-0 systemd[1]: Started libpod-conmon-168ac6936d7bb267792571f161012b7d8f608907baaf792ae3a6fa33f3d36c93.scope.
Jan 22 00:12:50 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:12:50 compute-0 podman[280166]: 2026-01-22 00:12:50.991650608 +0000 UTC m=+0.133613024 container init 168ac6936d7bb267792571f161012b7d8f608907baaf792ae3a6fa33f3d36c93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_banach, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 00:12:50 compute-0 podman[280166]: 2026-01-22 00:12:50.900299216 +0000 UTC m=+0.042261642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:12:50 compute-0 podman[280166]: 2026-01-22 00:12:50.99881662 +0000 UTC m=+0.140779066 container start 168ac6936d7bb267792571f161012b7d8f608907baaf792ae3a6fa33f3d36c93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_banach, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:12:51 compute-0 jolly_banach[280182]: 167 167
Jan 22 00:12:51 compute-0 podman[280166]: 2026-01-22 00:12:51.004209497 +0000 UTC m=+0.146171943 container attach 168ac6936d7bb267792571f161012b7d8f608907baaf792ae3a6fa33f3d36c93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_banach, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:12:51 compute-0 systemd[1]: libpod-168ac6936d7bb267792571f161012b7d8f608907baaf792ae3a6fa33f3d36c93.scope: Deactivated successfully.
Jan 22 00:12:51 compute-0 conmon[280182]: conmon 168ac6936d7bb2677925 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-168ac6936d7bb267792571f161012b7d8f608907baaf792ae3a6fa33f3d36c93.scope/container/memory.events
Jan 22 00:12:51 compute-0 podman[280166]: 2026-01-22 00:12:51.006385285 +0000 UTC m=+0.148347691 container died 168ac6936d7bb267792571f161012b7d8f608907baaf792ae3a6fa33f3d36c93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_banach, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:12:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-802c37cb389db28c65839f9e27f1374f55187a9ad45ebd9341f7a363d665bb59-merged.mount: Deactivated successfully.
Jan 22 00:12:51 compute-0 podman[280166]: 2026-01-22 00:12:51.050329757 +0000 UTC m=+0.192292183 container remove 168ac6936d7bb267792571f161012b7d8f608907baaf792ae3a6fa33f3d36c93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_banach, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:12:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:51 compute-0 systemd[1]: libpod-conmon-168ac6936d7bb267792571f161012b7d8f608907baaf792ae3a6fa33f3d36c93.scope: Deactivated successfully.
Jan 22 00:12:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:12:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:51.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:12:51 compute-0 podman[280206]: 2026-01-22 00:12:51.236299833 +0000 UTC m=+0.048985960 container create 245a2b9b220ad0006439d18a95c0035b5a50036b34addc33e438c9bb0acfdad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gould, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 00:12:51 compute-0 systemd[1]: Started libpod-conmon-245a2b9b220ad0006439d18a95c0035b5a50036b34addc33e438c9bb0acfdad9.scope.
Jan 22 00:12:51 compute-0 podman[280206]: 2026-01-22 00:12:51.215026713 +0000 UTC m=+0.027712840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:12:51 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:12:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2301b5f8426c5d50d0cbf0e4410f285631b1601d8687447680f5d1171f738c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:12:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2301b5f8426c5d50d0cbf0e4410f285631b1601d8687447680f5d1171f738c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:12:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2301b5f8426c5d50d0cbf0e4410f285631b1601d8687447680f5d1171f738c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:12:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2301b5f8426c5d50d0cbf0e4410f285631b1601d8687447680f5d1171f738c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:12:51 compute-0 podman[280206]: 2026-01-22 00:12:51.346290053 +0000 UTC m=+0.158976170 container init 245a2b9b220ad0006439d18a95c0035b5a50036b34addc33e438c9bb0acfdad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gould, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 00:12:51 compute-0 podman[280206]: 2026-01-22 00:12:51.354664463 +0000 UTC m=+0.167350580 container start 245a2b9b220ad0006439d18a95c0035b5a50036b34addc33e438c9bb0acfdad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:12:51 compute-0 podman[280206]: 2026-01-22 00:12:51.359575485 +0000 UTC m=+0.172261602 container attach 245a2b9b220ad0006439d18a95c0035b5a50036b34addc33e438c9bb0acfdad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gould, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 00:12:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:51.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 00:12:52 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 8455 writes, 36K keys, 8452 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 8455 writes, 8452 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1547 writes, 6861 keys, 1545 commit groups, 1.0 writes per commit group, ingest: 10.42 MB, 0.02 MB/s
                                           Interval WAL: 1547 writes, 1545 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     90.5      0.55              0.20        22    0.025       0      0       0.0       0.0
                                             L6      1/0   10.14 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.8    115.9     95.6      1.94              0.77        21    0.092    111K    12K       0.0       0.0
                                            Sum      1/0   10.14 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.8     90.4     94.5      2.49              0.96        43    0.058    111K    12K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.4     76.4     78.8      0.73              0.27        10    0.073     31K   3090       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    115.9     95.6      1.94              0.77        21    0.092    111K    12K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     91.2      0.54              0.20        21    0.026       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.048, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.23 GB write, 0.08 MB/s write, 0.22 GB read, 0.08 MB/s read, 2.5 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559f1db2f1f0#2 capacity: 304.00 MB usage: 26.11 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000336 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1485,25.25 MB,8.30587%) FilterBlock(44,315.73 KB,0.101426%) IndexBlock(44,566.09 KB,0.181851%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
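[editor's note] The throughput figures in this periodic RocksDB dump are simply ingest divided by uptime, so the dump can be sanity-checked by hand. A quick check with values copied from the "DB Stats" section above (Python; RocksDB rounds to two decimals, hence the logged 0.02 MB/s):

    # Values copied from the RocksDB "DB Stats" dump above.
    uptime_s, interval_s = 3000.0, 600.0
    cumulative_ingest_gb = 0.05   # "Cumulative writes: ... ingest: 0.05 GB"
    interval_ingest_mb = 10.42    # "Interval writes: ... ingest: 10.42 MB"

    print(cumulative_ingest_gb * 1024 / uptime_s)  # ~0.017 MB/s -> logged 0.02 MB/s
    print(interval_ingest_mb / interval_s)         # ~0.017 MB/s -> logged 0.02 MB/s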
Jan 22 00:12:52 compute-0 serene_gould[280223]: {
Jan 22 00:12:52 compute-0 serene_gould[280223]:     "1": [
Jan 22 00:12:52 compute-0 serene_gould[280223]:         {
Jan 22 00:12:52 compute-0 serene_gould[280223]:             "devices": [
Jan 22 00:12:52 compute-0 serene_gould[280223]:                 "/dev/loop3"
Jan 22 00:12:52 compute-0 serene_gould[280223]:             ],
Jan 22 00:12:52 compute-0 serene_gould[280223]:             "lv_name": "ceph_lv0",
Jan 22 00:12:52 compute-0 serene_gould[280223]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:12:52 compute-0 serene_gould[280223]:             "lv_size": "7511998464",
Jan 22 00:12:52 compute-0 serene_gould[280223]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 00:12:52 compute-0 serene_gould[280223]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:12:52 compute-0 serene_gould[280223]:             "name": "ceph_lv0",
Jan 22 00:12:52 compute-0 serene_gould[280223]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:12:52 compute-0 serene_gould[280223]:             "tags": {
Jan 22 00:12:52 compute-0 serene_gould[280223]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:12:52 compute-0 serene_gould[280223]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:12:52 compute-0 serene_gould[280223]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 00:12:52 compute-0 serene_gould[280223]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:12:52 compute-0 serene_gould[280223]:                 "ceph.cluster_name": "ceph",
Jan 22 00:12:52 compute-0 serene_gould[280223]:                 "ceph.crush_device_class": "",
Jan 22 00:12:52 compute-0 serene_gould[280223]:                 "ceph.encrypted": "0",
Jan 22 00:12:52 compute-0 serene_gould[280223]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:12:52 compute-0 serene_gould[280223]:                 "ceph.osd_id": "1",
Jan 22 00:12:52 compute-0 serene_gould[280223]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 00:12:52 compute-0 serene_gould[280223]:                 "ceph.type": "block",
Jan 22 00:12:52 compute-0 serene_gould[280223]:                 "ceph.vdo": "0"
Jan 22 00:12:52 compute-0 serene_gould[280223]:             },
Jan 22 00:12:52 compute-0 serene_gould[280223]:             "type": "block",
Jan 22 00:12:52 compute-0 serene_gould[280223]:             "vg_name": "ceph_vg0"
Jan 22 00:12:52 compute-0 serene_gould[280223]:         }
Jan 22 00:12:52 compute-0 serene_gould[280223]:     ]
Jan 22 00:12:52 compute-0 serene_gould[280223]: }
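[editor's note] The JSON block above is the stdout of the `ceph-volume ... lvm list --format json` invocation recorded in the sudo line at 00:12:50: top-level keys are OSD ids and each value is a list of LV records. A minimal sketch of extracting the OSD-to-device mapping (Python; the literal below is abbreviated from the output above):

    import json

    # Abbreviated from the `lvm list --format json` output logged above.
    out = json.loads('''
    {
        "1": [
            {
                "devices": ["/dev/loop3"],
                "lv_path": "/dev/ceph_vg0/ceph_lv0",
                "tags": {
                    "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
                    "ceph.type": "block"
                }
            }
        ]
    }
    ''')

    for osd_id, lvs in out.items():
        for lv in lvs:
            print(osd_id, lv["tags"]["ceph.type"], lv["lv_path"], lv["devices"])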
Jan 22 00:12:52 compute-0 systemd[1]: libpod-245a2b9b220ad0006439d18a95c0035b5a50036b34addc33e438c9bb0acfdad9.scope: Deactivated successfully.
Jan 22 00:12:52 compute-0 podman[280206]: 2026-01-22 00:12:52.232028724 +0000 UTC m=+1.044714851 container died 245a2b9b220ad0006439d18a95c0035b5a50036b34addc33e438c9bb0acfdad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:12:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2301b5f8426c5d50d0cbf0e4410f285631b1601d8687447680f5d1171f738c1-merged.mount: Deactivated successfully.
Jan 22 00:12:52 compute-0 podman[280206]: 2026-01-22 00:12:52.296857914 +0000 UTC m=+1.109544051 container remove 245a2b9b220ad0006439d18a95c0035b5a50036b34addc33e438c9bb0acfdad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gould, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:12:52 compute-0 systemd[1]: libpod-conmon-245a2b9b220ad0006439d18a95c0035b5a50036b34addc33e438c9bb0acfdad9.scope: Deactivated successfully.
Jan 22 00:12:52 compute-0 sudo[280100]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:52 compute-0 sudo[280245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:12:52 compute-0 sudo[280245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:52 compute-0 sudo[280245]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:52 compute-0 sudo[280270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:12:52 compute-0 sudo[280270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:52 compute-0 sudo[280270]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:52 compute-0 sudo[280295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:12:52 compute-0 sudo[280295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:52 compute-0 sudo[280295]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:52 compute-0 sudo[280320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 22 00:12:52 compute-0 sudo[280320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:52 compute-0 ceph-mon[74318]: pgmap v1656: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:12:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:53.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:12:53 compute-0 podman[280384]: 2026-01-22 00:12:53.101846331 +0000 UTC m=+0.047469343 container create d4693cd3c0fe3e431824ef08df88136de86ead89f9f261f4e4ab3025c2885953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:12:53 compute-0 systemd[1]: Started libpod-conmon-d4693cd3c0fe3e431824ef08df88136de86ead89f9f261f4e4ab3025c2885953.scope.
Jan 22 00:12:53 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:12:53 compute-0 podman[280384]: 2026-01-22 00:12:53.08084075 +0000 UTC m=+0.026463852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:12:53 compute-0 podman[280384]: 2026-01-22 00:12:53.185247897 +0000 UTC m=+0.130870979 container init d4693cd3c0fe3e431824ef08df88136de86ead89f9f261f4e4ab3025c2885953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:12:53 compute-0 podman[280384]: 2026-01-22 00:12:53.195454273 +0000 UTC m=+0.141077295 container start d4693cd3c0fe3e431824ef08df88136de86ead89f9f261f4e4ab3025c2885953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:12:53 compute-0 podman[280384]: 2026-01-22 00:12:53.199082176 +0000 UTC m=+0.144705218 container attach d4693cd3c0fe3e431824ef08df88136de86ead89f9f261f4e4ab3025c2885953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 00:12:53 compute-0 angry_zhukovsky[280399]: 167 167
Jan 22 00:12:53 compute-0 systemd[1]: libpod-d4693cd3c0fe3e431824ef08df88136de86ead89f9f261f4e4ab3025c2885953.scope: Deactivated successfully.
Jan 22 00:12:53 compute-0 podman[280384]: 2026-01-22 00:12:53.200750678 +0000 UTC m=+0.146373730 container died d4693cd3c0fe3e431824ef08df88136de86ead89f9f261f4e4ab3025c2885953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:12:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfdb8fe87e5750399a22c9c76ec8fe3c0dd22542dd6f5911c0e873ff72cff869-merged.mount: Deactivated successfully.
Jan 22 00:12:53 compute-0 podman[280384]: 2026-01-22 00:12:53.247445775 +0000 UTC m=+0.193068827 container remove d4693cd3c0fe3e431824ef08df88136de86ead89f9f261f4e4ab3025c2885953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 00:12:53 compute-0 systemd[1]: libpod-conmon-d4693cd3c0fe3e431824ef08df88136de86ead89f9f261f4e4ab3025c2885953.scope: Deactivated successfully.
Jan 22 00:12:53 compute-0 podman[280425]: 2026-01-22 00:12:53.432985608 +0000 UTC m=+0.042415627 container create 003c6325749821584f0d9e0a392b1f5280d20038246a7e05b99b44e5d1c628cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chatelet, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 22 00:12:53 compute-0 systemd[1]: Started libpod-conmon-003c6325749821584f0d9e0a392b1f5280d20038246a7e05b99b44e5d1c628cc.scope.
Jan 22 00:12:53 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:12:53 compute-0 podman[280425]: 2026-01-22 00:12:53.415863577 +0000 UTC m=+0.025293606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468b37fe7fc97b61af0baa4a6b01543eb9467ea5601e9c59a30d491de3fcae12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468b37fe7fc97b61af0baa4a6b01543eb9467ea5601e9c59a30d491de3fcae12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468b37fe7fc97b61af0baa4a6b01543eb9467ea5601e9c59a30d491de3fcae12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468b37fe7fc97b61af0baa4a6b01543eb9467ea5601e9c59a30d491de3fcae12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:12:53 compute-0 podman[280425]: 2026-01-22 00:12:53.534694721 +0000 UTC m=+0.144124830 container init 003c6325749821584f0d9e0a392b1f5280d20038246a7e05b99b44e5d1c628cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chatelet, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 00:12:53 compute-0 podman[280425]: 2026-01-22 00:12:53.544203255 +0000 UTC m=+0.153633274 container start 003c6325749821584f0d9e0a392b1f5280d20038246a7e05b99b44e5d1c628cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 00:12:53 compute-0 podman[280425]: 2026-01-22 00:12:53.547478067 +0000 UTC m=+0.156908186 container attach 003c6325749821584f0d9e0a392b1f5280d20038246a7e05b99b44e5d1c628cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chatelet, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 00:12:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:53.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:12:54 compute-0 epic_chatelet[280443]: {
Jan 22 00:12:54 compute-0 epic_chatelet[280443]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 22 00:12:54 compute-0 epic_chatelet[280443]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:12:54 compute-0 epic_chatelet[280443]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 00:12:54 compute-0 epic_chatelet[280443]:         "osd_id": 1,
Jan 22 00:12:54 compute-0 epic_chatelet[280443]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:12:54 compute-0 epic_chatelet[280443]:         "type": "bluestore"
Jan 22 00:12:54 compute-0 epic_chatelet[280443]:     }
Jan 22 00:12:54 compute-0 epic_chatelet[280443]: }
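[editor's note] The `raw list` output above describes the same OSD from the device side, keyed by osd_uuid; that uuid should match the ceph.osd_fsid tag from the earlier lvm listing. A consistency check with values copied from the two JSON blocks (Python):

    # osd_fsid tag from the lvm listing vs. osd_uuid key from the raw listing.
    lvm_osd_fsid = "4f45f4f4-edfc-474c-93fc-45d596171ed8"
    raw = {
        "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
            "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 1,
            "type": "bluestore",
        },
    }
    assert lvm_osd_fsid in raw
    assert raw[lvm_osd_fsid]["osd_id"] == 1  # the same OSD, seen both ways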
Jan 22 00:12:54 compute-0 systemd[1]: libpod-003c6325749821584f0d9e0a392b1f5280d20038246a7e05b99b44e5d1c628cc.scope: Deactivated successfully.
Jan 22 00:12:54 compute-0 podman[280425]: 2026-01-22 00:12:54.524396805 +0000 UTC m=+1.133826834 container died 003c6325749821584f0d9e0a392b1f5280d20038246a7e05b99b44e5d1c628cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 00:12:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-468b37fe7fc97b61af0baa4a6b01543eb9467ea5601e9c59a30d491de3fcae12-merged.mount: Deactivated successfully.
Jan 22 00:12:54 compute-0 podman[280425]: 2026-01-22 00:12:54.588391989 +0000 UTC m=+1.197822018 container remove 003c6325749821584f0d9e0a392b1f5280d20038246a7e05b99b44e5d1c628cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:12:54 compute-0 systemd[1]: libpod-conmon-003c6325749821584f0d9e0a392b1f5280d20038246a7e05b99b44e5d1c628cc.scope: Deactivated successfully.
Jan 22 00:12:54 compute-0 sudo[280320]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
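[editor's note] Across the pg_autoscaler lines above, the raw pg target is the pool's fraction of space used times its bias times a constant that works out to exactly 300 here (plausibly target PGs per OSD times the number of OSDs, but that factor is inferred from the logged numbers, not stated in the log); the result is then quantized to a power of two, with pools held at their current pg_num when the change would be small. Reproducing two of the logged values (Python):

    # Usage fractions, biases, and targets copied from the lines above.
    FACTOR = 300  # inferred: logged pg target / (used * bias) for every pool here

    for pool, used, bias, logged in [
        (".mgr",               2.0538165363856318e-05, 1.0, 0.006161449609156895),
        ("cephfs.cephfs.meta", 1.4540294062907128e-06, 4.0, 0.0017448352875488555),
    ]:
        target = used * bias * FACTOR
        assert abs(target - logged) < 1e-12, pool
        print(pool, round(target, 6))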
Jan 22 00:12:54 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:12:54 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:12:54 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 2e44f23b-c2e7-4f29-9bb1-ec04f62ef6e7 does not exist
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev e372f430-8bbe-45ee-8647-a3669d87eac7 does not exist
Jan 22 00:12:54 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev c3587a31-94ff-4df5-96f8-f6cece9f5e54 does not exist
Jan 22 00:12:54 compute-0 sudo[280476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:12:54 compute-0 sudo[280476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:54 compute-0 sudo[280476]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:54 compute-0 sudo[280501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 00:12:54 compute-0 sudo[280501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:54 compute-0 sudo[280501]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:54 compute-0 ceph-mon[74318]: pgmap v1657: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:12:54 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:12:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:55.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:12:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:55.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:12:56 compute-0 ceph-mon[74318]: pgmap v1658: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:57.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:12:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:12:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:57.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:12:58 compute-0 sudo[280528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:12:58 compute-0 sudo[280528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:58 compute-0 sudo[280528]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:58 compute-0 sudo[280553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:12:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:12:58 compute-0 sudo[280553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:12:58 compute-0 sudo[280553]: pam_unix(sudo:session): session closed for user root
Jan 22 00:12:58 compute-0 ceph-mon[74318]: pgmap v1659: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:12:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:12:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:12:59.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:12:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:12:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:12:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:12:59.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:00 compute-0 podman[280579]: 2026-01-22 00:13:00.005532659 +0000 UTC m=+0.112312782 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
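[editor's note] The config_data field embedded in this health_status event is a Python-style dict literal (single quotes, bare True), so it can be loaded with ast.literal_eval rather than json. A small sketch, using an abbreviated copy of the value above, that pulls out the healthcheck command podman runs:

    import ast

    # Abbreviated from the config_data value in the health_status event above.
    config_data = ("{'healthcheck': {'mount': "
                   "'/var/lib/openstack/healthchecks/ovn_controller', "
                   "'test': '/openstack/healthcheck'}, "
                   "'net': 'host', 'privileged': True, 'restart': 'always'}")

    cfg = ast.literal_eval(config_data)
    print(cfg["healthcheck"]["test"])  # -> /openstack/healthcheck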
Jan 22 00:13:00 compute-0 ceph-mon[74318]: pgmap v1660: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:01.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:13:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:01.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:13:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:03.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:03 compute-0 ceph-mon[74318]: pgmap v1661: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:03.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:13:04 compute-0 ceph-mon[74318]: pgmap v1662: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:13:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:05.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:13:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:05.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:07.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:07 compute-0 ceph-mon[74318]: pgmap v1663: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:07.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:08 compute-0 ceph-mon[74318]: pgmap v1664: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:13:08 compute-0 podman[280611]: 2026-01-22 00:13:08.958961967 +0000 UTC m=+0.062874329 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 00:13:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:09.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:13:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:13:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:13:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:13:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:13:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:13:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:09.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:10 compute-0 ceph-mon[74318]: pgmap v1665: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:13:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:11.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:13:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:13:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:11.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:13:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:13:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:13.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:13:13 compute-0 ceph-mon[74318]: pgmap v1666: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:13:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:13.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:13:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:13:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:13:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:15.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:13:15 compute-0 ceph-mon[74318]: pgmap v1667: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:15.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:16 compute-0 ceph-mon[74318]: pgmap v1668: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:13:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:17.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:13:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:13:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:17.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:13:18 compute-0 ceph-mon[74318]: pgmap v1669: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:13:18 compute-0 sudo[280635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:13:18 compute-0 sudo[280635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:13:18 compute-0 sudo[280635]: pam_unix(sudo:session): session closed for user root
Jan 22 00:13:19 compute-0 sudo[280660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:13:19 compute-0 sudo[280660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:13:19 compute-0 sudo[280660]: pam_unix(sudo:session): session closed for user root
Jan 22 00:13:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:19.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:19.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:20 compute-0 ceph-mon[74318]: pgmap v1670: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:13:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:21.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:13:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:21.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:22 compute-0 ceph-mon[74318]: pgmap v1671: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:13:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:23.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:13:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:23.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/186920260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:13:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:13:24 compute-0 ceph-mon[74318]: pgmap v1672: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:24 compute-0 nova_compute[247516]: 2026-01-22 00:13:24.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:13:24 compute-0 nova_compute[247516]: 2026-01-22 00:13:24.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:13:24 compute-0 nova_compute[247516]: 2026-01-22 00:13:24.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:13:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:25.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:25 compute-0 nova_compute[247516]: 2026-01-22 00:13:25.247 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 00:13:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:13:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:25.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:13:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2671853269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:13:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/506596332' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:13:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/506596332' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:13:26 compute-0 ceph-mon[74318]: pgmap v1673: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:13:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:27.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:13:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:27.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:13:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:29.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:29 compute-0 ceph-mon[74318]: pgmap v1674: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:29.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:30 compute-0 podman[280691]: 2026-01-22 00:13:30.9782239 +0000 UTC m=+0.091581169 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:13:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:31.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:31 compute-0 ceph-mon[74318]: pgmap v1675: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3285284338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:13:31 compute-0 nova_compute[247516]: 2026-01-22 00:13:31.242 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:13:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:31.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:31 compute-0 nova_compute[247516]: 2026-01-22 00:13:31.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:13:31 compute-0 nova_compute[247516]: 2026-01-22 00:13:31.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:13:32 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:13:32.200 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 00:13:32 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:13:32.202 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 00:13:32 compute-0 ceph-mon[74318]: pgmap v1676: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:32 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1179838171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:13:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:33.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:33.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:13:33 compute-0 nova_compute[247516]: 2026-01-22 00:13:33.987 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:13:34 compute-0 ceph-mon[74318]: pgmap v1677: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:34 compute-0 nova_compute[247516]: 2026-01-22 00:13:34.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:13:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:35.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:35.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:36 compute-0 nova_compute[247516]: 2026-01-22 00:13:36.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:13:36 compute-0 nova_compute[247516]: 2026-01-22 00:13:36.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:13:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:37 compute-0 ceph-mon[74318]: pgmap v1678: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:37.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:13:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:37.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:13:37 compute-0 nova_compute[247516]: 2026-01-22 00:13:37.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:13:37 compute-0 nova_compute[247516]: 2026-01-22 00:13:37.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:13:38 compute-0 ceph-mon[74318]: pgmap v1679: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:38 compute-0 nova_compute[247516]: 2026-01-22 00:13:38.579 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:13:38 compute-0 nova_compute[247516]: 2026-01-22 00:13:38.580 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:13:38 compute-0 nova_compute[247516]: 2026-01-22 00:13:38.581 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:13:38 compute-0 nova_compute[247516]: 2026-01-22 00:13:38.581 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:13:38 compute-0 nova_compute[247516]: 2026-01-22 00:13:38.582 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:13:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:13:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:13:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2395782091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:13:39 compute-0 nova_compute[247516]: 2026-01-22 00:13:39.054 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:39 compute-0 sudo[280743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:13:39 compute-0 sudo[280743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:13:39 compute-0 sudo[280743]: pam_unix(sudo:session): session closed for user root
Jan 22 00:13:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:39.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:39 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:13:39.205 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 00:13:39 compute-0 sudo[280775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:13:39 compute-0 podman[280768]: 2026-01-22 00:13:39.208522519 +0000 UTC m=+0.052654883 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 00:13:39 compute-0 sudo[280775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:13:39 compute-0 sudo[280775]: pam_unix(sudo:session): session closed for user root
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:13:39 compute-0 nova_compute[247516]: 2026-01-22 00:13:39.278 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:13:39 compute-0 nova_compute[247516]: 2026-01-22 00:13:39.279 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5164MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:13:39 compute-0 nova_compute[247516]: 2026-01-22 00:13:39.280 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:13:39 compute-0 nova_compute[247516]: 2026-01-22 00:13:39.280 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:13:39
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'volumes', 'default.rgw.log', '.rgw.root']
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 22 00:13:39 compute-0 nova_compute[247516]: 2026-01-22 00:13:39.432 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 22 00:13:39 compute-0 nova_compute[247516]: 2026-01-22 00:13:39.433 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:13:39 compute-0 nova_compute[247516]: 2026-01-22 00:13:39.433 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:13:39 compute-0 nova_compute[247516]: 2026-01-22 00:13:39.473 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:13:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:13:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:13:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:39.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:13:39 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2395782091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:13:40 compute-0 nova_compute[247516]: 2026-01-22 00:13:40.363 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.890s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:13:40 compute-0 nova_compute[247516]: 2026-01-22 00:13:40.372 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:13:40 compute-0 nova_compute[247516]: 2026-01-22 00:13:40.769 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 00:13:40 compute-0 nova_compute[247516]: 2026-01-22 00:13:40.772 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:13:40 compute-0 nova_compute[247516]: 2026-01-22 00:13:40.772 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:13:40 compute-0 ceph-mon[74318]: pgmap v1680: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:40 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2931122081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:13:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:41.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:41.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:41 compute-0 nova_compute[247516]: 2026-01-22 00:13:41.774 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:13:43 compute-0 ceph-mon[74318]: pgmap v1681: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:13:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:43.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:13:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:43.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:13:44 compute-0 ceph-mon[74318]: pgmap v1682: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:45.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:45.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:46 compute-0 ceph-mon[74318]: pgmap v1683: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:13:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:47.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:13:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:13:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:47.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:13:48 compute-0 ceph-mon[74318]: pgmap v1684: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:13:48.775 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:13:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:13:48.776 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:13:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:13:48.776 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:13:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:13:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:13:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:49.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:13:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:13:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:49.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:13:50 compute-0 ceph-mon[74318]: pgmap v1685: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:13:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:51.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:13:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:13:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:51.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:13:52 compute-0 ceph-mon[74318]: pgmap v1686: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:53.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:53.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:13:54 compute-0 ceph-mon[74318]: pgmap v1687: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:13:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 00:13:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:13:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:55.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:13:55 compute-0 sudo[280844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:13:55 compute-0 sudo[280844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:13:55 compute-0 sudo[280844]: pam_unix(sudo:session): session closed for user root
Jan 22 00:13:55 compute-0 sudo[280869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:13:55 compute-0 sudo[280869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:13:55 compute-0 sudo[280869]: pam_unix(sudo:session): session closed for user root
Jan 22 00:13:55 compute-0 sudo[280894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:13:55 compute-0 sudo[280894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:13:55 compute-0 sudo[280894]: pam_unix(sudo:session): session closed for user root
Jan 22 00:13:55 compute-0 sudo[280919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 00:13:55 compute-0 sudo[280919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:13:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:13:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:55.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:13:55 compute-0 sudo[280919]: pam_unix(sudo:session): session closed for user root
Jan 22 00:13:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:13:56 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:13:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 00:13:56 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:13:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 00:13:56 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:13:56 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev f562834b-fb53-4d75-ae33-6e318eb47f6a does not exist
Jan 22 00:13:56 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev eef3d5c3-141e-4014-943a-51fdbfa085dc does not exist
Jan 22 00:13:56 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev bc02c596-025c-46c8-9974-7aa9090520e5 does not exist
Jan 22 00:13:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 00:13:56 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:13:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 00:13:56 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:13:56 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:13:56 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:13:56 compute-0 sudo[280975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:13:56 compute-0 sudo[280975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:13:56 compute-0 sudo[280975]: pam_unix(sudo:session): session closed for user root
Jan 22 00:13:56 compute-0 sudo[281000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:13:56 compute-0 sudo[281000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:13:56 compute-0 sudo[281000]: pam_unix(sudo:session): session closed for user root
Jan 22 00:13:56 compute-0 sudo[281025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:13:56 compute-0 sudo[281025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:13:56 compute-0 sudo[281025]: pam_unix(sudo:session): session closed for user root
Jan 22 00:13:56 compute-0 sudo[281050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 00:13:56 compute-0 sudo[281050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:13:56 compute-0 ceph-mon[74318]: pgmap v1688: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:13:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:13:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:13:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:13:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:13:56 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:13:56 compute-0 podman[281115]: 2026-01-22 00:13:56.746188418 +0000 UTC m=+0.038502284 container create c0b5fd4ae423750bce1c7c86429f1df7e7832b412bcc1f3a19512661fb252cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_albattani, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:13:56 compute-0 systemd[1]: Started libpod-conmon-c0b5fd4ae423750bce1c7c86429f1df7e7832b412bcc1f3a19512661fb252cf7.scope.
Jan 22 00:13:56 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:13:56 compute-0 podman[281115]: 2026-01-22 00:13:56.72658148 +0000 UTC m=+0.018895376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:13:56 compute-0 podman[281115]: 2026-01-22 00:13:56.835038503 +0000 UTC m=+0.127352449 container init c0b5fd4ae423750bce1c7c86429f1df7e7832b412bcc1f3a19512661fb252cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 00:13:56 compute-0 podman[281115]: 2026-01-22 00:13:56.842808763 +0000 UTC m=+0.135122669 container start c0b5fd4ae423750bce1c7c86429f1df7e7832b412bcc1f3a19512661fb252cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_albattani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 00:13:56 compute-0 podman[281115]: 2026-01-22 00:13:56.847725707 +0000 UTC m=+0.140039633 container attach c0b5fd4ae423750bce1c7c86429f1df7e7832b412bcc1f3a19512661fb252cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_albattani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:13:56 compute-0 focused_albattani[281132]: 167 167
Jan 22 00:13:56 compute-0 podman[281115]: 2026-01-22 00:13:56.849602635 +0000 UTC m=+0.141916511 container died c0b5fd4ae423750bce1c7c86429f1df7e7832b412bcc1f3a19512661fb252cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_albattani, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 00:13:56 compute-0 systemd[1]: libpod-c0b5fd4ae423750bce1c7c86429f1df7e7832b412bcc1f3a19512661fb252cf7.scope: Deactivated successfully.
Jan 22 00:13:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eb560d7db87dbcd0e4e6084b3daea4c787255272f993f8986b59d5bf185a78b-merged.mount: Deactivated successfully.
Jan 22 00:13:56 compute-0 podman[281115]: 2026-01-22 00:13:56.899496351 +0000 UTC m=+0.191810257 container remove c0b5fd4ae423750bce1c7c86429f1df7e7832b412bcc1f3a19512661fb252cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:13:56 compute-0 systemd[1]: libpod-conmon-c0b5fd4ae423750bce1c7c86429f1df7e7832b412bcc1f3a19512661fb252cf7.scope: Deactivated successfully.
Jan 22 00:13:57 compute-0 podman[281156]: 2026-01-22 00:13:57.059405819 +0000 UTC m=+0.039250698 container create 386fe95ddf2976696afe140b8e3e2f00627d406609403a29905126cee73b05bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 00:13:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:57 compute-0 systemd[1]: Started libpod-conmon-386fe95ddf2976696afe140b8e3e2f00627d406609403a29905126cee73b05bb.scope.
Jan 22 00:13:57 compute-0 podman[281156]: 2026-01-22 00:13:57.04298751 +0000 UTC m=+0.022832419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:13:57 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/973c2aa6c3183de4be3b5c7ebfe2d175f00d3c9c202d37e7bd21dca521896595/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/973c2aa6c3183de4be3b5c7ebfe2d175f00d3c9c202d37e7bd21dca521896595/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/973c2aa6c3183de4be3b5c7ebfe2d175f00d3c9c202d37e7bd21dca521896595/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/973c2aa6c3183de4be3b5c7ebfe2d175f00d3c9c202d37e7bd21dca521896595/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/973c2aa6c3183de4be3b5c7ebfe2d175f00d3c9c202d37e7bd21dca521896595/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 00:13:57 compute-0 podman[281156]: 2026-01-22 00:13:57.164343282 +0000 UTC m=+0.144188251 container init 386fe95ddf2976696afe140b8e3e2f00627d406609403a29905126cee73b05bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:13:57 compute-0 podman[281156]: 2026-01-22 00:13:57.17363198 +0000 UTC m=+0.153476869 container start 386fe95ddf2976696afe140b8e3e2f00627d406609403a29905126cee73b05bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:13:57 compute-0 podman[281156]: 2026-01-22 00:13:57.177740228 +0000 UTC m=+0.157585107 container attach 386fe95ddf2976696afe140b8e3e2f00627d406609403a29905126cee73b05bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_benz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 00:13:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:57.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:57.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:57 compute-0 gracious_benz[281172]: --> passed data devices: 0 physical, 1 LVM
Jan 22 00:13:57 compute-0 gracious_benz[281172]: --> relative data size: 1.0
Jan 22 00:13:57 compute-0 gracious_benz[281172]: --> All data devices are unavailable
Jan 22 00:13:58 compute-0 systemd[1]: libpod-386fe95ddf2976696afe140b8e3e2f00627d406609403a29905126cee73b05bb.scope: Deactivated successfully.
Jan 22 00:13:58 compute-0 podman[281156]: 2026-01-22 00:13:58.011793047 +0000 UTC m=+0.991637936 container died 386fe95ddf2976696afe140b8e3e2f00627d406609403a29905126cee73b05bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_benz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 00:13:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-973c2aa6c3183de4be3b5c7ebfe2d175f00d3c9c202d37e7bd21dca521896595-merged.mount: Deactivated successfully.
Jan 22 00:13:58 compute-0 podman[281156]: 2026-01-22 00:13:58.083089897 +0000 UTC m=+1.062934776 container remove 386fe95ddf2976696afe140b8e3e2f00627d406609403a29905126cee73b05bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_benz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 00:13:58 compute-0 systemd[1]: libpod-conmon-386fe95ddf2976696afe140b8e3e2f00627d406609403a29905126cee73b05bb.scope: Deactivated successfully.
Jan 22 00:13:58 compute-0 sudo[281050]: pam_unix(sudo:session): session closed for user root
Jan 22 00:13:58 compute-0 sudo[281201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:13:58 compute-0 sudo[281201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:13:58 compute-0 sudo[281201]: pam_unix(sudo:session): session closed for user root
Jan 22 00:13:58 compute-0 sudo[281226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:13:58 compute-0 sudo[281226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:13:58 compute-0 sudo[281226]: pam_unix(sudo:session): session closed for user root
Jan 22 00:13:58 compute-0 sudo[281251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:13:58 compute-0 sudo[281251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:13:58 compute-0 sudo[281251]: pam_unix(sudo:session): session closed for user root
Jan 22 00:13:58 compute-0 ceph-mon[74318]: pgmap v1689: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:58 compute-0 sudo[281276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 22 00:13:58 compute-0 sudo[281276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:13:58 compute-0 podman[281342]: 2026-01-22 00:13:58.868736005 +0000 UTC m=+0.060375844 container create 5dc0551f629facc29a748db2f524009a02ed2600caaa2e04a348e218b791eec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:13:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:13:58 compute-0 systemd[1]: Started libpod-conmon-5dc0551f629facc29a748db2f524009a02ed2600caaa2e04a348e218b791eec6.scope.
Jan 22 00:13:58 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:13:58 compute-0 podman[281342]: 2026-01-22 00:13:58.845727231 +0000 UTC m=+0.037367100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:13:58 compute-0 podman[281342]: 2026-01-22 00:13:58.953483302 +0000 UTC m=+0.145123171 container init 5dc0551f629facc29a748db2f524009a02ed2600caaa2e04a348e218b791eec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:13:58 compute-0 podman[281342]: 2026-01-22 00:13:58.961610275 +0000 UTC m=+0.153250114 container start 5dc0551f629facc29a748db2f524009a02ed2600caaa2e04a348e218b791eec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 00:13:58 compute-0 podman[281342]: 2026-01-22 00:13:58.965919448 +0000 UTC m=+0.157559287 container attach 5dc0551f629facc29a748db2f524009a02ed2600caaa2e04a348e218b791eec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilson, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 00:13:58 compute-0 reverent_wilson[281359]: 167 167
Jan 22 00:13:58 compute-0 systemd[1]: libpod-5dc0551f629facc29a748db2f524009a02ed2600caaa2e04a348e218b791eec6.scope: Deactivated successfully.
Jan 22 00:13:58 compute-0 podman[281342]: 2026-01-22 00:13:58.968132726 +0000 UTC m=+0.159772595 container died 5dc0551f629facc29a748db2f524009a02ed2600caaa2e04a348e218b791eec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:13:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-26feca073f296f923896c0fd7326a33048bec375eb81fc0da0632a01ae5a2019-merged.mount: Deactivated successfully.
Jan 22 00:13:59 compute-0 podman[281342]: 2026-01-22 00:13:59.016213527 +0000 UTC m=+0.207853366 container remove 5dc0551f629facc29a748db2f524009a02ed2600caaa2e04a348e218b791eec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:13:59 compute-0 systemd[1]: libpod-conmon-5dc0551f629facc29a748db2f524009a02ed2600caaa2e04a348e218b791eec6.scope: Deactivated successfully.
Jan 22 00:13:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:13:59 compute-0 podman[281382]: 2026-01-22 00:13:59.192648787 +0000 UTC m=+0.045207863 container create 2a98ead8c2562cb431a676949a83b42ac2200dcafdcabcfa956abf8a1b9a1bf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sutherland, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 00:13:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:13:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:13:59.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:13:59 compute-0 systemd[1]: Started libpod-conmon-2a98ead8c2562cb431a676949a83b42ac2200dcafdcabcfa956abf8a1b9a1bf9.scope.
Jan 22 00:13:59 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:13:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4f6771dd737a669d426ef866bf12d84e421d00f05893224095b9252c82b6ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:13:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4f6771dd737a669d426ef866bf12d84e421d00f05893224095b9252c82b6ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:13:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4f6771dd737a669d426ef866bf12d84e421d00f05893224095b9252c82b6ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:13:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4f6771dd737a669d426ef866bf12d84e421d00f05893224095b9252c82b6ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:13:59 compute-0 podman[281382]: 2026-01-22 00:13:59.266997442 +0000 UTC m=+0.119556528 container init 2a98ead8c2562cb431a676949a83b42ac2200dcafdcabcfa956abf8a1b9a1bf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:13:59 compute-0 podman[281382]: 2026-01-22 00:13:59.176002481 +0000 UTC m=+0.028561577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:13:59 compute-0 podman[281382]: 2026-01-22 00:13:59.27596504 +0000 UTC m=+0.128524116 container start 2a98ead8c2562cb431a676949a83b42ac2200dcafdcabcfa956abf8a1b9a1bf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:13:59 compute-0 sudo[281398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:13:59 compute-0 podman[281382]: 2026-01-22 00:13:59.280004346 +0000 UTC m=+0.132563442 container attach 2a98ead8c2562cb431a676949a83b42ac2200dcafdcabcfa956abf8a1b9a1bf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sutherland, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 00:13:59 compute-0 sudo[281398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:13:59 compute-0 sudo[281398]: pam_unix(sudo:session): session closed for user root
Jan 22 00:13:59 compute-0 sudo[281428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:13:59 compute-0 sudo[281428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:13:59 compute-0 sudo[281428]: pam_unix(sudo:session): session closed for user root
Jan 22 00:13:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:13:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:13:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:13:59.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]: {
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:     "1": [
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:         {
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:             "devices": [
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:                 "/dev/loop3"
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:             ],
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:             "lv_name": "ceph_lv0",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:             "lv_size": "7511998464",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:             "name": "ceph_lv0",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:             "tags": {
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:                 "ceph.cluster_name": "ceph",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:                 "ceph.crush_device_class": "",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:                 "ceph.encrypted": "0",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:                 "ceph.osd_id": "1",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:                 "ceph.type": "block",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:                 "ceph.vdo": "0"
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:             },
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:             "type": "block",
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:             "vg_name": "ceph_vg0"
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:         }
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]:     ]
Jan 22 00:14:00 compute-0 heuristic_sutherland[281401]: }
Jan 22 00:14:00 compute-0 systemd[1]: libpod-2a98ead8c2562cb431a676949a83b42ac2200dcafdcabcfa956abf8a1b9a1bf9.scope: Deactivated successfully.
Jan 22 00:14:00 compute-0 podman[281382]: 2026-01-22 00:14:00.055197259 +0000 UTC m=+0.907756345 container died 2a98ead8c2562cb431a676949a83b42ac2200dcafdcabcfa956abf8a1b9a1bf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sutherland, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 00:14:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-de4f6771dd737a669d426ef866bf12d84e421d00f05893224095b9252c82b6ae-merged.mount: Deactivated successfully.
Jan 22 00:14:00 compute-0 podman[281382]: 2026-01-22 00:14:00.121791664 +0000 UTC m=+0.974350750 container remove 2a98ead8c2562cb431a676949a83b42ac2200dcafdcabcfa956abf8a1b9a1bf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sutherland, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:14:00 compute-0 systemd[1]: libpod-conmon-2a98ead8c2562cb431a676949a83b42ac2200dcafdcabcfa956abf8a1b9a1bf9.scope: Deactivated successfully.
Jan 22 00:14:00 compute-0 sudo[281276]: pam_unix(sudo:session): session closed for user root
Jan 22 00:14:00 compute-0 sudo[281470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:14:00 compute-0 sudo[281470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:14:00 compute-0 sudo[281470]: pam_unix(sudo:session): session closed for user root
Jan 22 00:14:00 compute-0 sudo[281495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:14:00 compute-0 sudo[281495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:14:00 compute-0 sudo[281495]: pam_unix(sudo:session): session closed for user root
Jan 22 00:14:00 compute-0 sudo[281520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:14:00 compute-0 sudo[281520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:14:00 compute-0 sudo[281520]: pam_unix(sudo:session): session closed for user root
Jan 22 00:14:00 compute-0 sudo[281545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 22 00:14:00 compute-0 sudo[281545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:14:00 compute-0 ceph-mon[74318]: pgmap v1690: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:00 compute-0 podman[281610]: 2026-01-22 00:14:00.794950534 +0000 UTC m=+0.050540557 container create 24cd296a44b4b8963614108f0fe3d3703bd7dae4015a41737271e488e15028f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_carver, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 00:14:00 compute-0 systemd[1]: Started libpod-conmon-24cd296a44b4b8963614108f0fe3d3703bd7dae4015a41737271e488e15028f0.scope.
Jan 22 00:14:00 compute-0 podman[281610]: 2026-01-22 00:14:00.769068002 +0000 UTC m=+0.024658025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:14:00 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:14:00 compute-0 podman[281610]: 2026-01-22 00:14:00.88932451 +0000 UTC m=+0.144914533 container init 24cd296a44b4b8963614108f0fe3d3703bd7dae4015a41737271e488e15028f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_carver, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 00:14:00 compute-0 podman[281610]: 2026-01-22 00:14:00.897310988 +0000 UTC m=+0.152901001 container start 24cd296a44b4b8963614108f0fe3d3703bd7dae4015a41737271e488e15028f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_carver, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 00:14:00 compute-0 podman[281610]: 2026-01-22 00:14:00.902171128 +0000 UTC m=+0.157761141 container attach 24cd296a44b4b8963614108f0fe3d3703bd7dae4015a41737271e488e15028f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_carver, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 00:14:00 compute-0 relaxed_carver[281626]: 167 167
Jan 22 00:14:00 compute-0 systemd[1]: libpod-24cd296a44b4b8963614108f0fe3d3703bd7dae4015a41737271e488e15028f0.scope: Deactivated successfully.
Jan 22 00:14:00 compute-0 podman[281610]: 2026-01-22 00:14:00.904380817 +0000 UTC m=+0.159970860 container died 24cd296a44b4b8963614108f0fe3d3703bd7dae4015a41737271e488e15028f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_carver, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 00:14:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c50919f602fcddde45d79da85fd066f7646431abb910371eed46db99912b97f0-merged.mount: Deactivated successfully.
Jan 22 00:14:00 compute-0 podman[281610]: 2026-01-22 00:14:00.957838934 +0000 UTC m=+0.213428957 container remove 24cd296a44b4b8963614108f0fe3d3703bd7dae4015a41737271e488e15028f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_carver, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:14:00 compute-0 systemd[1]: libpod-conmon-24cd296a44b4b8963614108f0fe3d3703bd7dae4015a41737271e488e15028f0.scope: Deactivated successfully.
Jan 22 00:14:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:01 compute-0 podman[281649]: 2026-01-22 00:14:01.176064781 +0000 UTC m=+0.042026024 container create 5255dde8f591f747c6396ec7ff9a80806e14d8fe4f2846b89ac2111dd40317be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kepler, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:14:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:14:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:01.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:14:01 compute-0 systemd[1]: Started libpod-conmon-5255dde8f591f747c6396ec7ff9a80806e14d8fe4f2846b89ac2111dd40317be.scope.
Jan 22 00:14:01 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36a58a8bb27d6d4641935dc60b1d8d3e27d36b6c00e61bc7e134ebbb514cb0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36a58a8bb27d6d4641935dc60b1d8d3e27d36b6c00e61bc7e134ebbb514cb0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36a58a8bb27d6d4641935dc60b1d8d3e27d36b6c00e61bc7e134ebbb514cb0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36a58a8bb27d6d4641935dc60b1d8d3e27d36b6c00e61bc7e134ebbb514cb0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:14:01 compute-0 podman[281649]: 2026-01-22 00:14:01.155630737 +0000 UTC m=+0.021591990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:14:01 compute-0 podman[281649]: 2026-01-22 00:14:01.253097008 +0000 UTC m=+0.119058271 container init 5255dde8f591f747c6396ec7ff9a80806e14d8fe4f2846b89ac2111dd40317be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:14:01 compute-0 podman[281649]: 2026-01-22 00:14:01.262680796 +0000 UTC m=+0.128642059 container start 5255dde8f591f747c6396ec7ff9a80806e14d8fe4f2846b89ac2111dd40317be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kepler, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 00:14:01 compute-0 podman[281649]: 2026-01-22 00:14:01.267214026 +0000 UTC m=+0.133175289 container attach 5255dde8f591f747c6396ec7ff9a80806e14d8fe4f2846b89ac2111dd40317be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:14:01 compute-0 podman[281663]: 2026-01-22 00:14:01.345791482 +0000 UTC m=+0.123846901 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 00:14:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:01.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:02 compute-0 interesting_kepler[281666]: {
Jan 22 00:14:02 compute-0 interesting_kepler[281666]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 22 00:14:02 compute-0 interesting_kepler[281666]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:14:02 compute-0 interesting_kepler[281666]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 00:14:02 compute-0 interesting_kepler[281666]:         "osd_id": 1,
Jan 22 00:14:02 compute-0 interesting_kepler[281666]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:14:02 compute-0 interesting_kepler[281666]:         "type": "bluestore"
Jan 22 00:14:02 compute-0 interesting_kepler[281666]:     }
Jan 22 00:14:02 compute-0 interesting_kepler[281666]: }
Jan 22 00:14:02 compute-0 systemd[1]: libpod-5255dde8f591f747c6396ec7ff9a80806e14d8fe4f2846b89ac2111dd40317be.scope: Deactivated successfully.
Jan 22 00:14:02 compute-0 podman[281649]: 2026-01-22 00:14:02.228009694 +0000 UTC m=+1.093970927 container died 5255dde8f591f747c6396ec7ff9a80806e14d8fe4f2846b89ac2111dd40317be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:14:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a36a58a8bb27d6d4641935dc60b1d8d3e27d36b6c00e61bc7e134ebbb514cb0f-merged.mount: Deactivated successfully.
Jan 22 00:14:02 compute-0 podman[281649]: 2026-01-22 00:14:02.289757829 +0000 UTC m=+1.155719072 container remove 5255dde8f591f747c6396ec7ff9a80806e14d8fe4f2846b89ac2111dd40317be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 00:14:02 compute-0 systemd[1]: libpod-conmon-5255dde8f591f747c6396ec7ff9a80806e14d8fe4f2846b89ac2111dd40317be.scope: Deactivated successfully.
Jan 22 00:14:02 compute-0 sudo[281545]: pam_unix(sudo:session): session closed for user root
Jan 22 00:14:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:14:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:14:02 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:14:02 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:14:02 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 7e85ff10-75c4-4a28-a663-6c7b39561f85 does not exist
Jan 22 00:14:02 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 2b76ea7c-5a78-4357-8330-56cc26d99738 does not exist
Jan 22 00:14:02 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev b384c940-1d0d-4eb1-beb5-8c2c91dc3a7e does not exist
Jan 22 00:14:02 compute-0 sudo[281726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:14:02 compute-0 sudo[281726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:14:02 compute-0 sudo[281726]: pam_unix(sudo:session): session closed for user root
Jan 22 00:14:02 compute-0 ceph-mon[74318]: pgmap v1691: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:14:02 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:14:02 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 00:14:02 compute-0 sudo[281751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 00:14:02 compute-0 sudo[281751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:14:02 compute-0 sudo[281751]: pam_unix(sudo:session): session closed for user root
Jan 22 00:14:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:03.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:03.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:14:04 compute-0 ceph-mon[74318]: pgmap v1692: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:05.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:05.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:06 compute-0 ceph-mon[74318]: pgmap v1693: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:07.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:07.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:08 compute-0 ceph-mon[74318]: pgmap v1694: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:14:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:09.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:14:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:14:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:14:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:14:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:14:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:14:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:09.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:09 compute-0 podman[281781]: 2026-01-22 00:14:09.947948969 +0000 UTC m=+0.058457073 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Jan 22 00:14:10 compute-0 ceph-mon[74318]: pgmap v1695: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:11.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:11 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:14:11.712 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 00:14:11 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:14:11.714 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 00:14:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:14:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:11.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:14:12 compute-0 ceph-mon[74318]: pgmap v1696: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:13.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:14:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:13.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:14:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:14:15 compute-0 ceph-mon[74318]: pgmap v1697: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:14:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:15.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:14:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:15.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:16 compute-0 ceph-mon[74318]: pgmap v1698: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:17.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:14:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:17.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:14:18 compute-0 ceph-mon[74318]: pgmap v1699: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:14:18.895841) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040858895958, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1490, "num_deletes": 250, "total_data_size": 2695836, "memory_usage": 2739816, "flush_reason": "Manual Compaction"}
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040858909788, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 1544800, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36298, "largest_seqno": 37787, "table_properties": {"data_size": 1539677, "index_size": 2391, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13329, "raw_average_key_size": 20, "raw_value_size": 1528514, "raw_average_value_size": 2380, "num_data_blocks": 108, "num_entries": 642, "num_filter_entries": 642, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769040704, "oldest_key_time": 1769040704, "file_creation_time": 1769040858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 14044 microseconds, and 5759 cpu microseconds.
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:14:18.909882) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 1544800 bytes OK
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:14:18.909911) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:14:18.913629) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:14:18.913679) EVENT_LOG_v1 {"time_micros": 1769040858913664, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:14:18.913709) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2689560, prev total WAL file size 2689560, number of live WAL files 2.
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:14:18.915220) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323530' seq:72057594037927935, type:22 .. '6D6772737461740031353031' seq:0, type:0; will stop at (end)
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(1508KB)], [80(10MB)]
Jan 22 00:14:18 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040858915522, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 12182109, "oldest_snapshot_seqno": -1}
Jan 22 00:14:19 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6093 keys, 9427501 bytes, temperature: kUnknown
Jan 22 00:14:19 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040859022530, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 9427501, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9388030, "index_size": 23105, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 156244, "raw_average_key_size": 25, "raw_value_size": 9279412, "raw_average_value_size": 1522, "num_data_blocks": 933, "num_entries": 6093, "num_filter_entries": 6093, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769040858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:14:19 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:14:19 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:14:19.022871) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 9427501 bytes
Jan 22 00:14:19 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:14:19.024549) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.7 rd, 88.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 10.1 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(14.0) write-amplify(6.1) OK, records in: 6535, records dropped: 442 output_compression: NoCompression
Jan 22 00:14:19 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:14:19.024607) EVENT_LOG_v1 {"time_micros": 1769040859024592, "job": 46, "event": "compaction_finished", "compaction_time_micros": 107155, "compaction_time_cpu_micros": 61721, "output_level": 6, "num_output_files": 1, "total_output_size": 9427501, "num_input_records": 6535, "num_output_records": 6093, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 00:14:19 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:14:19 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040859025343, "job": 46, "event": "table_file_deletion", "file_number": 82}
Jan 22 00:14:19 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:14:19 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040859029260, "job": 46, "event": "table_file_deletion", "file_number": 80}
Jan 22 00:14:19 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:14:18.915069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:14:19 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:14:19.029331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:14:19 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:14:19.029336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:14:19 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:14:19.029338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:14:19 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:14:19.029340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:14:19 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:14:19.029342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:14:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:14:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:19.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:14:19 compute-0 sudo[281804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:14:19 compute-0 sudo[281804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:14:19 compute-0 sudo[281804]: pam_unix(sudo:session): session closed for user root
Jan 22 00:14:19 compute-0 sudo[281830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:14:19 compute-0 sudo[281830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:14:19 compute-0 sudo[281830]: pam_unix(sudo:session): session closed for user root
Jan 22 00:14:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:14:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:19.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:14:20 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:14:20.717 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 00:14:20 compute-0 ceph-mon[74318]: pgmap v1700: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:14:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:21.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:14:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:21.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:22 compute-0 ceph-mon[74318]: pgmap v1701: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:23.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:23.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:14:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3347332106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:14:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2607187094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:14:24 compute-0 ceph-mon[74318]: pgmap v1702: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:14:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:25.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:14:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:14:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:25.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:14:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3599632108' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:14:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/3599632108' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:14:26 compute-0 ceph-mon[74318]: pgmap v1703: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:26 compute-0 nova_compute[247516]: 2026-01-22 00:14:26.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:14:26 compute-0 nova_compute[247516]: 2026-01-22 00:14:26.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:14:26 compute-0 nova_compute[247516]: 2026-01-22 00:14:26.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:14:27 compute-0 nova_compute[247516]: 2026-01-22 00:14:27.007 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 00:14:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:14:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:27.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:14:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:27.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:14:29 compute-0 ceph-mon[74318]: pgmap v1704: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:14:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:29.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:14:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:14:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:29.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:14:31 compute-0 nova_compute[247516]: 2026-01-22 00:14:31.001 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:14:31 compute-0 ceph-mon[74318]: pgmap v1705: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:31.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:14:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:31.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:14:31 compute-0 nova_compute[247516]: 2026-01-22 00:14:31.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:14:31 compute-0 nova_compute[247516]: 2026-01-22 00:14:31.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:14:31 compute-0 podman[281861]: 2026-01-22 00:14:31.997728221 +0000 UTC m=+0.113872282 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 00:14:32 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/365479662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:14:33 compute-0 ceph-mon[74318]: pgmap v1706: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:33 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3785860618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:14:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:14:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:33.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:14:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:14:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:33.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:14:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:14:34 compute-0 ceph-mon[74318]: pgmap v1707: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:35.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:35.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:36 compute-0 ceph-mon[74318]: pgmap v1708: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:36 compute-0 nova_compute[247516]: 2026-01-22 00:14:36.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:14:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:37.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:37.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:37 compute-0 nova_compute[247516]: 2026-01-22 00:14:37.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:14:37 compute-0 nova_compute[247516]: 2026-01-22 00:14:37.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:14:38 compute-0 ceph-mon[74318]: pgmap v1709: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:14:38 compute-0 nova_compute[247516]: 2026-01-22 00:14:38.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:14:38 compute-0 nova_compute[247516]: 2026-01-22 00:14:38.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:14:38 compute-0 nova_compute[247516]: 2026-01-22 00:14:38.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:14:39 compute-0 nova_compute[247516]: 2026-01-22 00:14:39.023 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:14:39 compute-0 nova_compute[247516]: 2026-01-22 00:14:39.024 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:14:39 compute-0 nova_compute[247516]: 2026-01-22 00:14:39.025 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:14:39 compute-0 nova_compute[247516]: 2026-01-22 00:14:39.025 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:14:39 compute-0 nova_compute[247516]: 2026-01-22 00:14:39.026 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:14:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:39.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:14:39
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['backups', '.mgr', 'default.rgw.log', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', '.rgw.root', 'vms', 'cephfs.cephfs.data']
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 22 00:14:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:14:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3447828413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:14:39 compute-0 nova_compute[247516]: 2026-01-22 00:14:39.591 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:14:39 compute-0 sudo[281912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:14:39 compute-0 sudo[281912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:14:39 compute-0 sudo[281912]: pam_unix(sudo:session): session closed for user root
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:14:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:14:39 compute-0 sudo[281939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:14:39 compute-0 sudo[281939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:14:39 compute-0 sudo[281939]: pam_unix(sudo:session): session closed for user root
Jan 22 00:14:39 compute-0 nova_compute[247516]: 2026-01-22 00:14:39.771 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:14:39 compute-0 nova_compute[247516]: 2026-01-22 00:14:39.773 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5165MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:14:39 compute-0 nova_compute[247516]: 2026-01-22 00:14:39.773 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:14:39 compute-0 nova_compute[247516]: 2026-01-22 00:14:39.773 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:14:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:39.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:39 compute-0 nova_compute[247516]: 2026-01-22 00:14:39.877 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 22 00:14:39 compute-0 nova_compute[247516]: 2026-01-22 00:14:39.877 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:14:39 compute-0 nova_compute[247516]: 2026-01-22 00:14:39.877 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:14:39 compute-0 nova_compute[247516]: 2026-01-22 00:14:39.915 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:14:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:14:40 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4098506100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:14:40 compute-0 nova_compute[247516]: 2026-01-22 00:14:40.355 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:14:40 compute-0 nova_compute[247516]: 2026-01-22 00:14:40.366 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:14:40 compute-0 nova_compute[247516]: 2026-01-22 00:14:40.390 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 00:14:40 compute-0 nova_compute[247516]: 2026-01-22 00:14:40.393 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:14:40 compute-0 nova_compute[247516]: 2026-01-22 00:14:40.393 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:14:40 compute-0 ceph-mon[74318]: pgmap v1710: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:40 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3447828413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:14:40 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/4098506100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:14:40 compute-0 podman[281986]: 2026-01-22 00:14:40.962863362 +0000 UTC m=+0.073141899 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 00:14:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:14:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:41.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:14:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:14:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:41.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:14:42 compute-0 ceph-mon[74318]: pgmap v1711: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:43.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:43.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:14:44 compute-0 ceph-mon[74318]: pgmap v1712: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:45.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:45.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:46 compute-0 ceph-mon[74318]: pgmap v1713: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:14:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:47.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:14:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:47.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:48 compute-0 ceph-mon[74318]: pgmap v1714: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:14:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:14:48.776 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:14:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:14:48.776 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:14:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:14:48.777 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:14:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:14:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
Jan 22 00:14:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:49.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:49.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:50 compute-0 ceph-mon[74318]: pgmap v1715: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
Jan 22 00:14:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
Jan 22 00:14:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:14:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:51.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:14:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:51.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:52 compute-0 ceph-mon[74318]: pgmap v1716: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
Jan 22 00:14:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 22 00:14:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:53.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:53.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:14:54 compute-0 ceph-mon[74318]: pgmap v1717: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 22 00:14:54 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/343797328' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:14:54 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/343797328' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:14:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 00:14:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 596 B/s wr, 3 op/s
Jan 22 00:14:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:14:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:55.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:14:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:55.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:56 compute-0 ceph-mon[74318]: pgmap v1718: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 596 B/s wr, 3 op/s
Jan 22 00:14:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 596 B/s wr, 14 op/s
Jan 22 00:14:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:14:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:57.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:14:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:14:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:57.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:14:58 compute-0 ceph-mon[74318]: pgmap v1719: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 596 B/s wr, 14 op/s
Jan 22 00:14:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:14:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 596 B/s wr, 14 op/s
Jan 22 00:14:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:14:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:14:59.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:14:59 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:14:59.654 159050 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:c9:a5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7a:bc:c9:08:f9:98'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 00:14:59 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:14:59.657 159050 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 00:14:59 compute-0 sudo[282016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:14:59 compute-0 sudo[282016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:14:59 compute-0 sudo[282016]: pam_unix(sudo:session): session closed for user root
Jan 22 00:14:59 compute-0 sudo[282041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:14:59 compute-0 sudo[282041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:14:59 compute-0 sudo[282041]: pam_unix(sudo:session): session closed for user root
Jan 22 00:14:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:14:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:14:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:14:59.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:15:00 compute-0 ceph-mon[74318]: pgmap v1720: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 596 B/s wr, 14 op/s
Jan 22 00:15:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 426 B/s wr, 13 op/s
Jan 22 00:15:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:01.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:01.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:02 compute-0 ceph-mon[74318]: pgmap v1721: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 426 B/s wr, 13 op/s
Jan 22 00:15:02 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:15:02.659 159050 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2a76040-4536-46ac-93c9-20aa76f22ff4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 00:15:02 compute-0 sudo[282067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:15:02 compute-0 sudo[282067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:02 compute-0 sudo[282067]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:02 compute-0 sudo[282098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:15:02 compute-0 sudo[282098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:02 compute-0 sudo[282098]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:02 compute-0 sudo[282137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:15:02 compute-0 sudo[282137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:03 compute-0 sudo[282137]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:03 compute-0 podman[282090]: 2026-01-22 00:15:03.003067975 +0000 UTC m=+0.115253504 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 00:15:03 compute-0 sudo[282168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 00:15:03 compute-0 sudo[282168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 426 B/s wr, 13 op/s
Jan 22 00:15:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:03.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:03 compute-0 sudo[282168]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:15:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:03.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:15:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:15:04 compute-0 ceph-mon[74318]: pgmap v1722: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 426 B/s wr, 13 op/s
Jan 22 00:15:04 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 00:15:04 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:15:04 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 00:15:04 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:15:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 00:15:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:05.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:15:05 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:15:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 00:15:05 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:15:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 00:15:05 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:15:05 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev b6a9288b-acbe-446d-bfc8-0d104625fd0e does not exist
Jan 22 00:15:05 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 5a80ca74-7b9d-44ea-a4ec-78d45ccf4e85 does not exist
Jan 22 00:15:05 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 81bcdd56-95dd-4636-8c58-9ec70afde489 does not exist
Jan 22 00:15:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 00:15:05 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:15:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 00:15:05 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:15:05 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:15:05 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:15:05 compute-0 sudo[282225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:15:05 compute-0 sudo[282225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:05 compute-0 sudo[282225]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:05 compute-0 sudo[282250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:15:05 compute-0 sudo[282250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:05 compute-0 sudo[282250]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:05 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:15:05 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:15:05 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:15:05 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:15:05 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:15:05 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:15:05 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:15:05 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:15:05 compute-0 sudo[282275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:15:05 compute-0 sudo[282275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:05 compute-0 sudo[282275]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:15:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:05.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:15:05 compute-0 sudo[282300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 00:15:05 compute-0 sudo[282300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:06 compute-0 podman[282367]: 2026-01-22 00:15:06.297517184 +0000 UTC m=+0.064241993 container create 9cbc6adafc57f4ecb050b2022a29e147ea03a89ba33462661711b9652106d883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:15:06 compute-0 systemd[1]: Started libpod-conmon-9cbc6adafc57f4ecb050b2022a29e147ea03a89ba33462661711b9652106d883.scope.
Jan 22 00:15:06 compute-0 podman[282367]: 2026-01-22 00:15:06.26642834 +0000 UTC m=+0.033153199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:15:06 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:15:06 compute-0 podman[282367]: 2026-01-22 00:15:06.39577573 +0000 UTC m=+0.162500529 container init 9cbc6adafc57f4ecb050b2022a29e147ea03a89ba33462661711b9652106d883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:15:06 compute-0 podman[282367]: 2026-01-22 00:15:06.40477797 +0000 UTC m=+0.171502749 container start 9cbc6adafc57f4ecb050b2022a29e147ea03a89ba33462661711b9652106d883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:15:06 compute-0 podman[282367]: 2026-01-22 00:15:06.409589439 +0000 UTC m=+0.176314238 container attach 9cbc6adafc57f4ecb050b2022a29e147ea03a89ba33462661711b9652106d883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:15:06 compute-0 pensive_mirzakhani[282383]: 167 167
Jan 22 00:15:06 compute-0 systemd[1]: libpod-9cbc6adafc57f4ecb050b2022a29e147ea03a89ba33462661711b9652106d883.scope: Deactivated successfully.
Jan 22 00:15:06 compute-0 podman[282367]: 2026-01-22 00:15:06.416824324 +0000 UTC m=+0.183549113 container died 9cbc6adafc57f4ecb050b2022a29e147ea03a89ba33462661711b9652106d883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:15:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e97e1ef3978b41bbd52b900ab26b5a68e6a4f987439ef1e28921710babe8630-merged.mount: Deactivated successfully.
Jan 22 00:15:06 compute-0 podman[282367]: 2026-01-22 00:15:06.464650376 +0000 UTC m=+0.231375165 container remove 9cbc6adafc57f4ecb050b2022a29e147ea03a89ba33462661711b9652106d883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:15:06 compute-0 systemd[1]: libpod-conmon-9cbc6adafc57f4ecb050b2022a29e147ea03a89ba33462661711b9652106d883.scope: Deactivated successfully.
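The "167 167" printed by the pensive_mirzakhani container above is the uid/gid pair of the ceph user baked into the image; cephadm launches short-lived containers like this one (create, start, attach, die, remove within well under a second) to probe the image before touching the host. A minimal sketch of reproducing the probe by hand, assuming the probe is a stat of /var/lib/ceph (the actual command line is not shown in the log, only its "167 167" output):

    #!/usr/bin/env python3
    # Minimal sketch: run a throwaway container from the same image and
    # print the ceph user's uid/gid. The stat-based probe command is an
    # assumption; the log only shows the "167 167" result.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True,
    )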
Jan 22 00:15:06 compute-0 podman[282405]: 2026-01-22 00:15:06.660991023 +0000 UTC m=+0.064187041 container create 1d46dab3e1629c343e509ebef45aca45253a05a9dc41885f8114dabc4017faef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Jan 22 00:15:06 compute-0 systemd[1]: Started libpod-conmon-1d46dab3e1629c343e509ebef45aca45253a05a9dc41885f8114dabc4017faef.scope.
Jan 22 00:15:06 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:15:06 compute-0 podman[282405]: 2026-01-22 00:15:06.635589266 +0000 UTC m=+0.038785334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14496ddac73b5e0a46d607cadc178a5e3d478da2ea461157f41f2e5c776a6b4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14496ddac73b5e0a46d607cadc178a5e3d478da2ea461157f41f2e5c776a6b4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14496ddac73b5e0a46d607cadc178a5e3d478da2ea461157f41f2e5c776a6b4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14496ddac73b5e0a46d607cadc178a5e3d478da2ea461157f41f2e5c776a6b4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14496ddac73b5e0a46d607cadc178a5e3d478da2ea461157f41f2e5c776a6b4e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
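The kernel notes that each xfs path bind-mounted into the container only supports timestamps until 2038 (0x7fffffff); the warning is informational and repeats once per mount. A minimal sketch converting the advertised limit into a date:

    #!/usr/bin/env python3
    # Minimal sketch: turn the 0x7fffffff limit from the xfs remount
    # warnings above into a human-readable UTC timestamp.
    from datetime import datetime, timezone

    XFS_TS_LIMIT = 0x7fffffff  # largest 32-bit signed epoch value
    print(datetime.fromtimestamp(XFS_TS_LIMIT, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00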
Jan 22 00:15:06 compute-0 podman[282405]: 2026-01-22 00:15:06.74605227 +0000 UTC m=+0.149248368 container init 1d46dab3e1629c343e509ebef45aca45253a05a9dc41885f8114dabc4017faef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:15:06 compute-0 podman[282405]: 2026-01-22 00:15:06.760453377 +0000 UTC m=+0.163649395 container start 1d46dab3e1629c343e509ebef45aca45253a05a9dc41885f8114dabc4017faef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 00:15:06 compute-0 podman[282405]: 2026-01-22 00:15:06.76407555 +0000 UTC m=+0.167271608 container attach 1d46dab3e1629c343e509ebef45aca45253a05a9dc41885f8114dabc4017faef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:15:06 compute-0 ceph-mon[74318]: pgmap v1723: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 00:15:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 10 op/s
Jan 22 00:15:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:07.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:07 compute-0 laughing_perlman[282422]: --> passed data devices: 0 physical, 1 LVM
Jan 22 00:15:07 compute-0 laughing_perlman[282422]: --> relative data size: 1.0
Jan 22 00:15:07 compute-0 laughing_perlman[282422]: --> All data devices are unavailable
Jan 22 00:15:07 compute-0 systemd[1]: libpod-1d46dab3e1629c343e509ebef45aca45253a05a9dc41885f8114dabc4017faef.scope: Deactivated successfully.
Jan 22 00:15:07 compute-0 podman[282405]: 2026-01-22 00:15:07.695228908 +0000 UTC m=+1.098424926 container died 1d46dab3e1629c343e509ebef45aca45253a05a9dc41885f8114dabc4017faef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_perlman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 00:15:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-14496ddac73b5e0a46d607cadc178a5e3d478da2ea461157f41f2e5c776a6b4e-merged.mount: Deactivated successfully.
Jan 22 00:15:07 compute-0 podman[282405]: 2026-01-22 00:15:07.763820555 +0000 UTC m=+1.167016623 container remove 1d46dab3e1629c343e509ebef45aca45253a05a9dc41885f8114dabc4017faef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:15:07 compute-0 systemd[1]: libpod-conmon-1d46dab3e1629c343e509ebef45aca45253a05a9dc41885f8114dabc4017faef.scope: Deactivated successfully.
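The laughing_perlman container was a ceph-volume dry run: "passed data devices: 0 physical, 1 LVM" plus "All data devices are unavailable" together mean the only candidate device is an already-consumed LVM volume, so no new OSDs will be created. A minimal sketch for pulling those "-->" progress lines back out of the journal; matching on SYSLOG_IDENTIFIER mirrors how the container name appears above and is an assumption:

    #!/usr/bin/env python3
    # Minimal sketch: list the "-->" progress lines a short-lived
    # ceph-volume container wrote to the journal.
    import subprocess

    out = subprocess.run(
        ["journalctl", "-o", "cat", "--no-pager",
         "SYSLOG_IDENTIFIER=laughing_perlman"],
        capture_output=True, text=True, check=False,
    ).stdout
    for line in out.splitlines():
        if line.startswith("-->"):
            print(line)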
Jan 22 00:15:07 compute-0 sudo[282300]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:15:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:07.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:15:07 compute-0 sudo[282451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:15:07 compute-0 sudo[282451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:07 compute-0 sudo[282451]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:07 compute-0 sudo[282476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:15:07 compute-0 sudo[282476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:07 compute-0 sudo[282476]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:08 compute-0 sudo[282501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:15:08 compute-0 sudo[282501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:08 compute-0 sudo[282501]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:08 compute-0 sudo[282526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 22 00:15:08 compute-0 sudo[282526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:08 compute-0 podman[282593]: 2026-01-22 00:15:08.433070873 +0000 UTC m=+0.041898790 container create 58219488301fb130f472df5cdafe3167cb9cc34537adedeca94eaad5fbb4c495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_zhukovsky, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:15:08 compute-0 systemd[1]: Started libpod-conmon-58219488301fb130f472df5cdafe3167cb9cc34537adedeca94eaad5fbb4c495.scope.
Jan 22 00:15:08 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:15:08 compute-0 podman[282593]: 2026-01-22 00:15:08.413799996 +0000 UTC m=+0.022627943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:15:08 compute-0 podman[282593]: 2026-01-22 00:15:08.515729627 +0000 UTC m=+0.124557564 container init 58219488301fb130f472df5cdafe3167cb9cc34537adedeca94eaad5fbb4c495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:15:08 compute-0 podman[282593]: 2026-01-22 00:15:08.522433795 +0000 UTC m=+0.131261692 container start 58219488301fb130f472df5cdafe3167cb9cc34537adedeca94eaad5fbb4c495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:15:08 compute-0 thirsty_zhukovsky[282610]: 167 167
Jan 22 00:15:08 compute-0 systemd[1]: libpod-58219488301fb130f472df5cdafe3167cb9cc34537adedeca94eaad5fbb4c495.scope: Deactivated successfully.
Jan 22 00:15:08 compute-0 podman[282593]: 2026-01-22 00:15:08.526685066 +0000 UTC m=+0.135512973 container attach 58219488301fb130f472df5cdafe3167cb9cc34537adedeca94eaad5fbb4c495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_zhukovsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:15:08 compute-0 podman[282593]: 2026-01-22 00:15:08.528361498 +0000 UTC m=+0.137189425 container died 58219488301fb130f472df5cdafe3167cb9cc34537adedeca94eaad5fbb4c495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:15:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbfd7381fee4ca25425afb987c0687a5e2c35b6222b86ffa3bcf1a6776749b54-merged.mount: Deactivated successfully.
Jan 22 00:15:08 compute-0 podman[282593]: 2026-01-22 00:15:08.574443617 +0000 UTC m=+0.183271524 container remove 58219488301fb130f472df5cdafe3167cb9cc34537adedeca94eaad5fbb4c495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Jan 22 00:15:08 compute-0 systemd[1]: libpod-conmon-58219488301fb130f472df5cdafe3167cb9cc34537adedeca94eaad5fbb4c495.scope: Deactivated successfully.
Jan 22 00:15:08 compute-0 podman[282634]: 2026-01-22 00:15:08.752518608 +0000 UTC m=+0.055515113 container create ab1dc4a299dff724ad35f9eef3c4ccd9f3bbf748cc27d60641cc0c2b3595162a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 22 00:15:08 compute-0 systemd[1]: Started libpod-conmon-ab1dc4a299dff724ad35f9eef3c4ccd9f3bbf748cc27d60641cc0c2b3595162a.scope.
Jan 22 00:15:08 compute-0 podman[282634]: 2026-01-22 00:15:08.731085603 +0000 UTC m=+0.034082098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:15:08 compute-0 ceph-mon[74318]: pgmap v1724: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 10 op/s
Jan 22 00:15:08 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91455d55e8a1aed8c08f4bab114acb542aa3a4087809389cd6818ad92212495e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91455d55e8a1aed8c08f4bab114acb542aa3a4087809389cd6818ad92212495e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91455d55e8a1aed8c08f4bab114acb542aa3a4087809389cd6818ad92212495e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91455d55e8a1aed8c08f4bab114acb542aa3a4087809389cd6818ad92212495e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:15:08 compute-0 podman[282634]: 2026-01-22 00:15:08.893548963 +0000 UTC m=+0.196545468 container init ab1dc4a299dff724ad35f9eef3c4ccd9f3bbf748cc27d60641cc0c2b3595162a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:15:08 compute-0 podman[282634]: 2026-01-22 00:15:08.900376745 +0000 UTC m=+0.203373220 container start ab1dc4a299dff724ad35f9eef3c4ccd9f3bbf748cc27d60641cc0c2b3595162a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:15:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:15:08 compute-0 podman[282634]: 2026-01-22 00:15:08.904209833 +0000 UTC m=+0.207206308 container attach ab1dc4a299dff724ad35f9eef3c4ccd9f3bbf748cc27d60641cc0c2b3595162a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_galois, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:15:09 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 00:15:09 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 41K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 12K writes, 4138 syncs, 3.14 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1488 writes, 3861 keys, 1488 commit groups, 1.0 writes per commit group, ingest: 1.74 MB, 0.00 MB/s
                                           Interval WAL: 1488 writes, 690 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
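The OSD's RocksDB dumps these DB Stats every 600 s, reporting cumulative and interval write counters. A minimal sketch rederiving the printed ratios from the rounded figures in this dump (RocksDB uses the exact counters internally, hence its 3.14 writes per sync versus the ~2.90 that the rounded "12K" yields):

    #!/usr/bin/env python3
    # Minimal sketch: rederive the ratios printed in the DB Stats block
    # above from its (rounded) cumulative WAL figures.
    wal_writes = 12_000   # "Cumulative WAL: 12K writes" (rounded)
    wal_syncs  = 4_138    # "4138 syncs"
    ingest_gb  = 0.03     # "written: 0.03 GB"
    uptime_s   = 3000.1   # "Uptime(secs): 3000.1 total"

    print(f"writes per sync: {wal_writes / wal_syncs:.2f}")
    print(f"WAL throughput: {ingest_gb * 1024 / uptime_s:.3f} MB/s")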
Jan 22 00:15:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:15:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:15:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:15:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:15:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:15:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:15:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:15:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:09.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
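The radosgw "beast" lines record anonymous HEAD / probes arriving every two seconds from 192.168.122.100 and .102, the cadence of an external health check rather than user traffic. A minimal sketch parsing the access-log fields; the regex is fitted to the exact lines shown here, not a general beast parser:

    #!/usr/bin/env python3
    # Minimal sketch: extract client IP, request, status and latency
    # from a radosgw beast access-log line like those above.
    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) .* latency=(?P<lat>[\d.]+)s'
    )

    line = ('beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous '
            '[22/Jan/2026:00:15:09.310 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000031s')
    m = BEAST.search(line)
    print(m.group("ip"), m.group("req"), m.group("status"), m.group("lat"))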
Jan 22 00:15:09 compute-0 nifty_galois[282650]: {
Jan 22 00:15:09 compute-0 nifty_galois[282650]:     "1": [
Jan 22 00:15:09 compute-0 nifty_galois[282650]:         {
Jan 22 00:15:09 compute-0 nifty_galois[282650]:             "devices": [
Jan 22 00:15:09 compute-0 nifty_galois[282650]:                 "/dev/loop3"
Jan 22 00:15:09 compute-0 nifty_galois[282650]:             ],
Jan 22 00:15:09 compute-0 nifty_galois[282650]:             "lv_name": "ceph_lv0",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:             "lv_size": "7511998464",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:             "name": "ceph_lv0",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:             "tags": {
Jan 22 00:15:09 compute-0 nifty_galois[282650]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:                 "ceph.cluster_name": "ceph",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:                 "ceph.crush_device_class": "",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:                 "ceph.encrypted": "0",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:                 "ceph.osd_id": "1",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:                 "ceph.type": "block",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:                 "ceph.vdo": "0"
Jan 22 00:15:09 compute-0 nifty_galois[282650]:             },
Jan 22 00:15:09 compute-0 nifty_galois[282650]:             "type": "block",
Jan 22 00:15:09 compute-0 nifty_galois[282650]:             "vg_name": "ceph_vg0"
Jan 22 00:15:09 compute-0 nifty_galois[282650]:         }
Jan 22 00:15:09 compute-0 nifty_galois[282650]:     ]
Jan 22 00:15:09 compute-0 nifty_galois[282650]: }
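The JSON block above is the `ceph-volume lvm list --format json` payload requested by the cephadm call at 00:15:08: one top-level key per OSD id, each carrying its logical volume, backing devices and ceph.* LV tags. A minimal sketch extracting the interesting fields, reading from a saved file whose name (lvm_list.json) is an assumption:

    #!/usr/bin/env python3
    # Minimal sketch: summarize `ceph-volume lvm list --format json`
    # output like the block above.
    import json

    with open("lvm_list.json") as fh:
        lvm = json.load(fh)

    for osd_id, lvs in lvm.items():
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags.get('ceph.osd_fsid')}")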
Jan 22 00:15:09 compute-0 systemd[1]: libpod-ab1dc4a299dff724ad35f9eef3c4ccd9f3bbf748cc27d60641cc0c2b3595162a.scope: Deactivated successfully.
Jan 22 00:15:09 compute-0 podman[282634]: 2026-01-22 00:15:09.72378242 +0000 UTC m=+1.026778915 container died ab1dc4a299dff724ad35f9eef3c4ccd9f3bbf748cc27d60641cc0c2b3595162a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_galois, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 00:15:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-91455d55e8a1aed8c08f4bab114acb542aa3a4087809389cd6818ad92212495e-merged.mount: Deactivated successfully.
Jan 22 00:15:09 compute-0 podman[282634]: 2026-01-22 00:15:09.811258322 +0000 UTC m=+1.114254777 container remove ab1dc4a299dff724ad35f9eef3c4ccd9f3bbf748cc27d60641cc0c2b3595162a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_galois, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 00:15:09 compute-0 systemd[1]: libpod-conmon-ab1dc4a299dff724ad35f9eef3c4ccd9f3bbf748cc27d60641cc0c2b3595162a.scope: Deactivated successfully.
Jan 22 00:15:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:09.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:09 compute-0 sudo[282526]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:09 compute-0 sudo[282672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:15:09 compute-0 sudo[282672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:09 compute-0 sudo[282672]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:09 compute-0 sudo[282697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:15:09 compute-0 sudo[282697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:09 compute-0 sudo[282697]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:10 compute-0 sudo[282722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:15:10 compute-0 sudo[282722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:10 compute-0 sudo[282722]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:10 compute-0 sudo[282747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 22 00:15:10 compute-0 sudo[282747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:10 compute-0 podman[282814]: 2026-01-22 00:15:10.515734584 +0000 UTC m=+0.038477075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:15:10 compute-0 podman[282814]: 2026-01-22 00:15:10.8097637 +0000 UTC m=+0.332506171 container create 18f58b3e6f2ee5165017e6b6a812c59e4d03c3bda6a117ad4b60c41451ba196f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:15:10 compute-0 ceph-mon[74318]: pgmap v1725: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:11 compute-0 systemd[1]: Started libpod-conmon-18f58b3e6f2ee5165017e6b6a812c59e4d03c3bda6a117ad4b60c41451ba196f.scope.
Jan 22 00:15:11 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:15:11 compute-0 podman[282814]: 2026-01-22 00:15:11.051826414 +0000 UTC m=+0.574568935 container init 18f58b3e6f2ee5165017e6b6a812c59e4d03c3bda6a117ad4b60c41451ba196f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:15:11 compute-0 podman[282814]: 2026-01-22 00:15:11.057842921 +0000 UTC m=+0.580585392 container start 18f58b3e6f2ee5165017e6b6a812c59e4d03c3bda6a117ad4b60c41451ba196f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 00:15:11 compute-0 upbeat_liskov[282830]: 167 167
Jan 22 00:15:11 compute-0 systemd[1]: libpod-18f58b3e6f2ee5165017e6b6a812c59e4d03c3bda6a117ad4b60c41451ba196f.scope: Deactivated successfully.
Jan 22 00:15:11 compute-0 podman[282814]: 2026-01-22 00:15:11.062500125 +0000 UTC m=+0.585242596 container attach 18f58b3e6f2ee5165017e6b6a812c59e4d03c3bda6a117ad4b60c41451ba196f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_liskov, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:15:11 compute-0 podman[282814]: 2026-01-22 00:15:11.062847896 +0000 UTC m=+0.585590367 container died 18f58b3e6f2ee5165017e6b6a812c59e4d03c3bda6a117ad4b60c41451ba196f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 00:15:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b20f77d16c6197eb22d23954c4b3b84ba726fb99b1ace8bb38f38e4f45ee5d4-merged.mount: Deactivated successfully.
Jan 22 00:15:11 compute-0 podman[282814]: 2026-01-22 00:15:11.100023909 +0000 UTC m=+0.622766380 container remove 18f58b3e6f2ee5165017e6b6a812c59e4d03c3bda6a117ad4b60c41451ba196f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_liskov, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:15:11 compute-0 systemd[1]: libpod-conmon-18f58b3e6f2ee5165017e6b6a812c59e4d03c3bda6a117ad4b60c41451ba196f.scope: Deactivated successfully.
Jan 22 00:15:11 compute-0 podman[282832]: 2026-01-22 00:15:11.127229682 +0000 UTC m=+0.085975106 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 00:15:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:11 compute-0 podman[282875]: 2026-01-22 00:15:11.256535281 +0000 UTC m=+0.038257097 container create ba0e141e44508ffd37f015c6f183adbb31e8437664aab2b1e8e1839caa889f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pasteur, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:15:11 compute-0 systemd[1]: Started libpod-conmon-ba0e141e44508ffd37f015c6f183adbb31e8437664aab2b1e8e1839caa889f16.scope.
Jan 22 00:15:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:11.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:11 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9859544894f33b8267e1ec35d0d77159256cee75cfe38a13d81be09996757e62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9859544894f33b8267e1ec35d0d77159256cee75cfe38a13d81be09996757e62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9859544894f33b8267e1ec35d0d77159256cee75cfe38a13d81be09996757e62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:15:11 compute-0 podman[282875]: 2026-01-22 00:15:11.239062649 +0000 UTC m=+0.020784485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9859544894f33b8267e1ec35d0d77159256cee75cfe38a13d81be09996757e62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:15:11 compute-0 podman[282875]: 2026-01-22 00:15:11.352315311 +0000 UTC m=+0.134037197 container init ba0e141e44508ffd37f015c6f183adbb31e8437664aab2b1e8e1839caa889f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pasteur, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:15:11 compute-0 podman[282875]: 2026-01-22 00:15:11.360478044 +0000 UTC m=+0.142199860 container start ba0e141e44508ffd37f015c6f183adbb31e8437664aab2b1e8e1839caa889f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pasteur, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:15:11 compute-0 podman[282875]: 2026-01-22 00:15:11.36553427 +0000 UTC m=+0.147256186 container attach ba0e141e44508ffd37f015c6f183adbb31e8437664aab2b1e8e1839caa889f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pasteur, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:15:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:15:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:11.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:15:12 compute-0 ceph-mon[74318]: pgmap v1726: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:12 compute-0 angry_pasteur[282892]: {
Jan 22 00:15:12 compute-0 angry_pasteur[282892]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 22 00:15:12 compute-0 angry_pasteur[282892]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:15:12 compute-0 angry_pasteur[282892]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 00:15:12 compute-0 angry_pasteur[282892]:         "osd_id": 1,
Jan 22 00:15:12 compute-0 angry_pasteur[282892]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:15:12 compute-0 angry_pasteur[282892]:         "type": "bluestore"
Jan 22 00:15:12 compute-0 angry_pasteur[282892]:     }
Jan 22 00:15:12 compute-0 angry_pasteur[282892]: }
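This second payload comes from the `ceph-volume raw list --format json` call issued at 00:15:10; unlike the lvm listing it is keyed by OSD fsid. A minimal sketch cross-checking the two payloads (both file names are assumptions):

    #!/usr/bin/env python3
    # Minimal sketch: confirm every OSD in `raw list` is also known to
    # `lvm list`, using the two JSON payloads shown in this log.
    import json

    raw = json.load(open("raw_list.json"))
    lvm = json.load(open("lvm_list.json"))

    lvm_fsids = {lv["tags"]["ceph.osd_fsid"]
                 for lvs in lvm.values() for lv in lvs}

    for osd_fsid, info in raw.items():
        status = "known" if osd_fsid in lvm_fsids else "missing from lvm list"
        print(f"osd.{info['osd_id']} ({info['type']}) on "
              f"{info['device']}: {status}")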
Jan 22 00:15:12 compute-0 systemd[1]: libpod-ba0e141e44508ffd37f015c6f183adbb31e8437664aab2b1e8e1839caa889f16.scope: Deactivated successfully.
Jan 22 00:15:12 compute-0 conmon[282892]: conmon ba0e141e44508ffd37f0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba0e141e44508ffd37f015c6f183adbb31e8437664aab2b1e8e1839caa889f16.scope/container/memory.events
Jan 22 00:15:12 compute-0 podman[282914]: 2026-01-22 00:15:12.356542435 +0000 UTC m=+0.040628740 container died ba0e141e44508ffd37f015c6f183adbb31e8437664aab2b1e8e1839caa889f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 00:15:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9859544894f33b8267e1ec35d0d77159256cee75cfe38a13d81be09996757e62-merged.mount: Deactivated successfully.
Jan 22 00:15:12 compute-0 podman[282914]: 2026-01-22 00:15:12.42766025 +0000 UTC m=+0.111746475 container remove ba0e141e44508ffd37f015c6f183adbb31e8437664aab2b1e8e1839caa889f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 00:15:12 compute-0 systemd[1]: libpod-conmon-ba0e141e44508ffd37f015c6f183adbb31e8437664aab2b1e8e1839caa889f16.scope: Deactivated successfully.
Jan 22 00:15:12 compute-0 sudo[282747]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:15:12 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:15:12 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:15:12 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:15:12 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 0ad9356b-0be5-4031-9995-67917badad97 does not exist
Jan 22 00:15:12 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 28c3302d-12ad-41dd-b025-0f519dea63cb does not exist
Jan 22 00:15:12 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev dde289d6-4b38-4aa5-96c7-ac33b07f92b3 does not exist
Jan 22 00:15:12 compute-0 sudo[282929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:15:12 compute-0 sudo[282929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:12 compute-0 sudo[282929]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:12 compute-0 sudo[282954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 00:15:12 compute-0 sudo[282954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:12 compute-0 sudo[282954]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:15:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:13.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:15:13 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:15:13 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:15:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:15:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:13.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:15:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:15:14 compute-0 ceph-mon[74318]: pgmap v1727: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:15:14.542048) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040914542068, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 744, "num_deletes": 251, "total_data_size": 1026978, "memory_usage": 1041944, "flush_reason": "Manual Compaction"}
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040914551110, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1015867, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37788, "largest_seqno": 38531, "table_properties": {"data_size": 1012016, "index_size": 1631, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8777, "raw_average_key_size": 19, "raw_value_size": 1004280, "raw_average_value_size": 2251, "num_data_blocks": 72, "num_entries": 446, "num_filter_entries": 446, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769040859, "oldest_key_time": 1769040859, "file_creation_time": 1769040914, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 9126 microseconds, and 2912 cpu microseconds.
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:15:14.551166) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1015867 bytes OK
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:15:14.551186) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:15:14.552839) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:15:14.552850) EVENT_LOG_v1 {"time_micros": 1769040914552846, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:15:14.552863) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1023259, prev total WAL file size 1023259, number of live WAL files 2.
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:15:14.553614) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(992KB)], [83(9206KB)]
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040914553827, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 10443368, "oldest_snapshot_seqno": -1}
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6023 keys, 8464293 bytes, temperature: kUnknown
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040914668844, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8464293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8426277, "index_size": 21842, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 155475, "raw_average_key_size": 25, "raw_value_size": 8319792, "raw_average_value_size": 1381, "num_data_blocks": 874, "num_entries": 6023, "num_filter_entries": 6023, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769040914, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:15:14.669552) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8464293 bytes
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:15:14.672471) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.6 rd, 73.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 9.0 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(18.6) write-amplify(8.3) OK, records in: 6539, records dropped: 516 output_compression: NoCompression
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:15:14.672501) EVENT_LOG_v1 {"time_micros": 1769040914672487, "job": 48, "event": "compaction_finished", "compaction_time_micros": 115331, "compaction_time_cpu_micros": 39290, "output_level": 6, "num_output_files": 1, "total_output_size": 8464293, "num_input_records": 6539, "num_output_records": 6023, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040914673590, "job": 48, "event": "table_file_deletion", "file_number": 85}
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769040914677017, "job": 48, "event": "table_file_deletion", "file_number": 83}
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:15:14.553208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:15:14.677321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:15:14.677330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:15:14.677335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:15:14.677340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:15:14 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:15:14.677349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:15:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:15.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:15:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:15.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:15:16 compute-0 ceph-mon[74318]: pgmap v1728: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:15:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:17.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:15:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:15:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:17.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:15:18 compute-0 ceph-mon[74318]: pgmap v1729: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:15:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:15:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:19.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:15:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:15:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:19.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:15:19 compute-0 sudo[282983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:15:19 compute-0 sudo[282983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:19 compute-0 sudo[282983]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:19 compute-0 sudo[283008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:15:19 compute-0 sudo[283008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:19 compute-0 sudo[283008]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:20 compute-0 ceph-mgr[74614]: [devicehealth INFO root] Check health
Jan 22 00:15:20 compute-0 ceph-mon[74318]: pgmap v1730: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:21.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:21.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:22 compute-0 ceph-mon[74318]: pgmap v1731: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:23.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:15:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:23.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:15:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:15:24 compute-0 ceph-mon[74318]: pgmap v1732: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:15:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:25.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:15:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:25.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/4229406120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:15:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/680752849' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:15:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/680752849' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:15:27 compute-0 ceph-mon[74318]: pgmap v1733: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/4092844319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:15:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:27.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:27.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:28 compute-0 ceph-mon[74318]: pgmap v1734: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:15:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:29.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:29 compute-0 nova_compute[247516]: 2026-01-22 00:15:29.396 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:15:29 compute-0 nova_compute[247516]: 2026-01-22 00:15:29.397 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:15:29 compute-0 nova_compute[247516]: 2026-01-22 00:15:29.398 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:15:29 compute-0 nova_compute[247516]: 2026-01-22 00:15:29.413 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 00:15:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:29.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:30 compute-0 ceph-mon[74318]: pgmap v1735: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:30 compute-0 nova_compute[247516]: 2026-01-22 00:15:30.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:15:30 compute-0 nova_compute[247516]: 2026-01-22 00:15:30.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:15:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:31.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:31.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:32 compute-0 nova_compute[247516]: 2026-01-22 00:15:32.004 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:15:32 compute-0 nova_compute[247516]: 2026-01-22 00:15:32.005 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:15:32 compute-0 nova_compute[247516]: 2026-01-22 00:15:32.005 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:15:32 compute-0 nova_compute[247516]: 2026-01-22 00:15:32.005 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 22 00:15:32 compute-0 ceph-mon[74318]: pgmap v1736: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:32 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3331378212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:15:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:33.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:15:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:33.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:15:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:15:34 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/989123868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:15:34 compute-0 podman[283040]: 2026-01-22 00:15:34.06105642 +0000 UTC m=+0.157383981 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 22 00:15:35 compute-0 ceph-mon[74318]: pgmap v1737: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:15:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:35.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:15:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:35.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:36 compute-0 ceph-mon[74318]: pgmap v1738: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:37.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:37.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:38 compute-0 nova_compute[247516]: 2026-01-22 00:15:38.006 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:15:38 compute-0 nova_compute[247516]: 2026-01-22 00:15:38.007 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:15:38 compute-0 nova_compute[247516]: 2026-01-22 00:15:38.007 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:15:38 compute-0 ceph-mon[74318]: pgmap v1739: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:15:38 compute-0 nova_compute[247516]: 2026-01-22 00:15:38.987 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:15:39
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control']
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 22 00:15:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:15:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:39.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:15:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:15:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:15:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:39.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:15:39 compute-0 nova_compute[247516]: 2026-01-22 00:15:39.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:15:40 compute-0 nova_compute[247516]: 2026-01-22 00:15:40.015 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:15:40 compute-0 nova_compute[247516]: 2026-01-22 00:15:40.016 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:15:40 compute-0 nova_compute[247516]: 2026-01-22 00:15:40.016 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:15:40 compute-0 nova_compute[247516]: 2026-01-22 00:15:40.016 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:15:40 compute-0 nova_compute[247516]: 2026-01-22 00:15:40.017 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:15:40 compute-0 sudo[283071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:15:40 compute-0 sudo[283071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:40 compute-0 sudo[283071]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:40 compute-0 sudo[283097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:15:40 compute-0 sudo[283097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:15:40 compute-0 sudo[283097]: pam_unix(sudo:session): session closed for user root
Jan 22 00:15:40 compute-0 ceph-mon[74318]: pgmap v1740: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:40 compute-0 nova_compute[247516]: 2026-01-22 00:15:40.500 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:15:40 compute-0 nova_compute[247516]: 2026-01-22 00:15:40.675 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:15:40 compute-0 nova_compute[247516]: 2026-01-22 00:15:40.677 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5173MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:15:40 compute-0 nova_compute[247516]: 2026-01-22 00:15:40.677 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:15:40 compute-0 nova_compute[247516]: 2026-01-22 00:15:40.677 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:15:40 compute-0 nova_compute[247516]: 2026-01-22 00:15:40.817 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 22 00:15:40 compute-0 nova_compute[247516]: 2026-01-22 00:15:40.818 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:15:40 compute-0 nova_compute[247516]: 2026-01-22 00:15:40.818 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:15:40 compute-0 nova_compute[247516]: 2026-01-22 00:15:40.863 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:15:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:41 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:15:41 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1515949975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:15:41 compute-0 nova_compute[247516]: 2026-01-22 00:15:41.324 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:15:41 compute-0 nova_compute[247516]: 2026-01-22 00:15:41.333 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:15:41 compute-0 nova_compute[247516]: 2026-01-22 00:15:41.355 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 00:15:41 compute-0 nova_compute[247516]: 2026-01-22 00:15:41.358 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:15:41 compute-0 nova_compute[247516]: 2026-01-22 00:15:41.358 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:15:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:41.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:41 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3110393988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:15:41 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1515949975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:15:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:41.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:41 compute-0 podman[283166]: 2026-01-22 00:15:41.924202546 +0000 UTC m=+0.047097812 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 00:15:42 compute-0 nova_compute[247516]: 2026-01-22 00:15:42.360 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:15:42 compute-0 nova_compute[247516]: 2026-01-22 00:15:42.360 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:15:42 compute-0 ceph-mon[74318]: pgmap v1741: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:43.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:15:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:43.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:15:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:15:43 compute-0 nova_compute[247516]: 2026-01-22 00:15:43.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:15:43 compute-0 nova_compute[247516]: 2026-01-22 00:15:43.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 22 00:15:44 compute-0 nova_compute[247516]: 2026-01-22 00:15:44.017 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 22 00:15:44 compute-0 ceph-mon[74318]: pgmap v1742: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:15:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:45.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:15:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:45.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:46 compute-0 ceph-mon[74318]: pgmap v1743: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:47.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:47.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:48 compute-0 ceph-mon[74318]: pgmap v1744: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:15:48.777 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:15:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:15:48.778 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:15:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:15:48.779 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:15:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:15:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:15:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:49.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:15:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:15:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:49.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:15:50 compute-0 ceph-mon[74318]: pgmap v1745: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:51.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:51.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:52 compute-0 ceph-mon[74318]: pgmap v1746: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:53.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:15:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:53.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:15:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:15:54 compute-0 ceph-mon[74318]: pgmap v1747: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:15:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 00:15:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:55.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:55.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:56 compute-0 ceph-mon[74318]: pgmap v1748: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:15:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:57.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:15:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:57.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:58 compute-0 ceph-mon[74318]: pgmap v1749: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:15:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:15:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:15:59.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:15:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:15:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:15:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:15:59.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:00 compute-0 sudo[283195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:16:00 compute-0 sudo[283195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:00 compute-0 sudo[283195]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:00 compute-0 sudo[283220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:16:00 compute-0 sudo[283220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:00 compute-0 sudo[283220]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:00 compute-0 ceph-mon[74318]: pgmap v1750: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:16:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:01.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:16:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:16:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:01.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:16:03 compute-0 ceph-mon[74318]: pgmap v1751: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:03.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:16:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:03.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:16:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:16:05 compute-0 podman[283247]: 2026-01-22 00:16:05.056499089 +0000 UTC m=+0.159518917 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 00:16:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:05 compute-0 ceph-mon[74318]: pgmap v1752: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:05.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:16:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:05.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:16:06 compute-0 ceph-mon[74318]: pgmap v1753: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:07.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:16:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:07.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:16:08 compute-0 nova_compute[247516]: 2026-01-22 00:16:08.070 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:16:08 compute-0 ceph-mon[74318]: pgmap v1754: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:16:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:16:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:16:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:16:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:16:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:16:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:16:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:09.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:09.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:10 compute-0 ceph-mon[74318]: pgmap v1755: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:11.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:11.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:12 compute-0 ceph-mon[74318]: pgmap v1756: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:12 compute-0 podman[283277]: 2026-01-22 00:16:12.955613808 +0000 UTC m=+0.065849931 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 22 00:16:13 compute-0 sudo[283297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:16:13 compute-0 sudo[283297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:13 compute-0 sudo[283297]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:13 compute-0 sudo[283322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:16:13 compute-0 sudo[283322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:13 compute-0 sudo[283322]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:13 compute-0 sudo[283347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:16:13 compute-0 sudo[283347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:13 compute-0 sudo[283347]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:13 compute-0 sudo[283372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 00:16:13 compute-0 sudo[283372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:13.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:13 compute-0 sudo[283372]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:16:13 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:16:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:16:13 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:16:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 22 00:16:13 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:16:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 22 00:16:13 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:16:13 compute-0 sudo[283417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:16:13 compute-0 sudo[283417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:13 compute-0 sudo[283417]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:13 compute-0 sudo[283442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:16:13 compute-0 sudo[283442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:13 compute-0 sudo[283442]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:13 compute-0 sudo[283467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:16:13 compute-0 sudo[283467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:13 compute-0 sudo[283467]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:13 compute-0 sudo[283492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 00:16:13 compute-0 sudo[283492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:13.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:16:14 compute-0 sudo[283492]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:16:14 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:16:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 00:16:14 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:16:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 00:16:14 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:16:14 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 628c21ab-9d27-4598-8773-446c4f557d08 does not exist
Jan 22 00:16:14 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev bcd70a10-650f-4ce4-9ead-d8c67af36bc4 does not exist
Jan 22 00:16:14 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 3535617d-8568-42c0-982e-ce3a9ce3ee65 does not exist
Jan 22 00:16:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 00:16:14 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:16:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 00:16:14 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:16:14 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:16:14 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:16:14 compute-0 sudo[283548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:16:14 compute-0 sudo[283548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:14 compute-0 sudo[283548]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:14 compute-0 ceph-mon[74318]: pgmap v1757: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:16:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:16:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:16:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:16:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:16:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:16:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:16:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:16:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:16:14 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:16:14 compute-0 sudo[283573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:16:14 compute-0 sudo[283573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:14 compute-0 sudo[283573]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:14 compute-0 sudo[283598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:16:14 compute-0 sudo[283598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:14 compute-0 sudo[283598]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:14 compute-0 sudo[283623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 00:16:14 compute-0 sudo[283623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:15 compute-0 podman[283688]: 2026-01-22 00:16:15.152906952 +0000 UTC m=+0.053029645 container create 8293cc7e4e0401e40c4f1552d447ab1879eefc6790e4aa8d1063277d8cf295be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 00:16:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:15 compute-0 systemd[1]: Started libpod-conmon-8293cc7e4e0401e40c4f1552d447ab1879eefc6790e4aa8d1063277d8cf295be.scope.
Jan 22 00:16:15 compute-0 podman[283688]: 2026-01-22 00:16:15.125964737 +0000 UTC m=+0.026087520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:16:15 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:16:15 compute-0 podman[283688]: 2026-01-22 00:16:15.246517855 +0000 UTC m=+0.146640598 container init 8293cc7e4e0401e40c4f1552d447ab1879eefc6790e4aa8d1063277d8cf295be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_morse, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 00:16:15 compute-0 podman[283688]: 2026-01-22 00:16:15.254193383 +0000 UTC m=+0.154316096 container start 8293cc7e4e0401e40c4f1552d447ab1879eefc6790e4aa8d1063277d8cf295be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_morse, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 00:16:15 compute-0 podman[283688]: 2026-01-22 00:16:15.258089514 +0000 UTC m=+0.158212237 container attach 8293cc7e4e0401e40c4f1552d447ab1879eefc6790e4aa8d1063277d8cf295be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 00:16:15 compute-0 focused_morse[283704]: 167 167
Jan 22 00:16:15 compute-0 systemd[1]: libpod-8293cc7e4e0401e40c4f1552d447ab1879eefc6790e4aa8d1063277d8cf295be.scope: Deactivated successfully.
Jan 22 00:16:15 compute-0 conmon[283704]: conmon 8293cc7e4e0401e40c4f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8293cc7e4e0401e40c4f1552d447ab1879eefc6790e4aa8d1063277d8cf295be.scope/container/memory.events
Jan 22 00:16:15 compute-0 podman[283688]: 2026-01-22 00:16:15.262478399 +0000 UTC m=+0.162601102 container died 8293cc7e4e0401e40c4f1552d447ab1879eefc6790e4aa8d1063277d8cf295be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:16:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-cec462b242bc2d69eb9b1e81ec24f2cd96c81e6677a517ab50555abac58c03f8-merged.mount: Deactivated successfully.
Jan 22 00:16:15 compute-0 podman[283688]: 2026-01-22 00:16:15.31118823 +0000 UTC m=+0.211310963 container remove 8293cc7e4e0401e40c4f1552d447ab1879eefc6790e4aa8d1063277d8cf295be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:16:15 compute-0 systemd[1]: libpod-conmon-8293cc7e4e0401e40c4f1552d447ab1879eefc6790e4aa8d1063277d8cf295be.scope: Deactivated successfully.
Jan 22 00:16:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:16:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:15.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:16:15 compute-0 podman[283727]: 2026-01-22 00:16:15.499123027 +0000 UTC m=+0.045089279 container create 857109654de00934e6ec6ca7bbb3347a2a234f7a50cda55502f03eb7909d6e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:16:15 compute-0 systemd[1]: Started libpod-conmon-857109654de00934e6ec6ca7bbb3347a2a234f7a50cda55502f03eb7909d6e2e.scope.
Jan 22 00:16:15 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783760d72faaa229d6182f80b320f8201ac59606c8824c20de368a6ffcfd742e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783760d72faaa229d6182f80b320f8201ac59606c8824c20de368a6ffcfd742e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783760d72faaa229d6182f80b320f8201ac59606c8824c20de368a6ffcfd742e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783760d72faaa229d6182f80b320f8201ac59606c8824c20de368a6ffcfd742e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783760d72faaa229d6182f80b320f8201ac59606c8824c20de368a6ffcfd742e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 00:16:15 compute-0 podman[283727]: 2026-01-22 00:16:15.571696267 +0000 UTC m=+0.117662459 container init 857109654de00934e6ec6ca7bbb3347a2a234f7a50cda55502f03eb7909d6e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_roentgen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:16:15 compute-0 podman[283727]: 2026-01-22 00:16:15.482825491 +0000 UTC m=+0.028791683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:16:15 compute-0 podman[283727]: 2026-01-22 00:16:15.579780807 +0000 UTC m=+0.125746979 container start 857109654de00934e6ec6ca7bbb3347a2a234f7a50cda55502f03eb7909d6e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:16:15 compute-0 podman[283727]: 2026-01-22 00:16:15.583795941 +0000 UTC m=+0.129762113 container attach 857109654de00934e6ec6ca7bbb3347a2a234f7a50cda55502f03eb7909d6e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 00:16:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:15.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:16 compute-0 sharp_roentgen[283745]: --> passed data devices: 0 physical, 1 LVM
Jan 22 00:16:16 compute-0 sharp_roentgen[283745]: --> relative data size: 1.0
Jan 22 00:16:16 compute-0 sharp_roentgen[283745]: --> All data devices are unavailable
Jan 22 00:16:16 compute-0 systemd[1]: libpod-857109654de00934e6ec6ca7bbb3347a2a234f7a50cda55502f03eb7909d6e2e.scope: Deactivated successfully.
Jan 22 00:16:16 compute-0 podman[283727]: 2026-01-22 00:16:16.458060057 +0000 UTC m=+1.004026219 container died 857109654de00934e6ec6ca7bbb3347a2a234f7a50cda55502f03eb7909d6e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:16:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-783760d72faaa229d6182f80b320f8201ac59606c8824c20de368a6ffcfd742e-merged.mount: Deactivated successfully.
Jan 22 00:16:16 compute-0 podman[283727]: 2026-01-22 00:16:16.532913597 +0000 UTC m=+1.078879779 container remove 857109654de00934e6ec6ca7bbb3347a2a234f7a50cda55502f03eb7909d6e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_roentgen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:16:16 compute-0 systemd[1]: libpod-conmon-857109654de00934e6ec6ca7bbb3347a2a234f7a50cda55502f03eb7909d6e2e.scope: Deactivated successfully.
Jan 22 00:16:16 compute-0 sudo[283623]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:16 compute-0 sudo[283772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:16:16 compute-0 sudo[283772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:16 compute-0 sudo[283772]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:16 compute-0 ceph-mon[74318]: pgmap v1758: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:16 compute-0 sudo[283797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:16:16 compute-0 sudo[283797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:16 compute-0 sudo[283797]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:16 compute-0 sudo[283822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:16:16 compute-0 sudo[283822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:16 compute-0 sudo[283822]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:16 compute-0 sudo[283847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 22 00:16:16 compute-0 sudo[283847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:17 compute-0 podman[283914]: 2026-01-22 00:16:17.28515943 +0000 UTC m=+0.058700691 container create 72631524fff3f8ac77fbf83d81ff54d8aa9b7ee34be34e223c11064ef284f8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:16:17 compute-0 systemd[1]: Started libpod-conmon-72631524fff3f8ac77fbf83d81ff54d8aa9b7ee34be34e223c11064ef284f8e8.scope.
Jan 22 00:16:17 compute-0 podman[283914]: 2026-01-22 00:16:17.256820922 +0000 UTC m=+0.030362263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:16:17 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:16:17 compute-0 podman[283914]: 2026-01-22 00:16:17.389191176 +0000 UTC m=+0.162732477 container init 72631524fff3f8ac77fbf83d81ff54d8aa9b7ee34be34e223c11064ef284f8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kepler, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 00:16:17 compute-0 podman[283914]: 2026-01-22 00:16:17.401968951 +0000 UTC m=+0.175510232 container start 72631524fff3f8ac77fbf83d81ff54d8aa9b7ee34be34e223c11064ef284f8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 00:16:17 compute-0 podman[283914]: 2026-01-22 00:16:17.406101289 +0000 UTC m=+0.179642640 container attach 72631524fff3f8ac77fbf83d81ff54d8aa9b7ee34be34e223c11064ef284f8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:16:17 compute-0 elastic_kepler[283930]: 167 167
Jan 22 00:16:17 compute-0 systemd[1]: libpod-72631524fff3f8ac77fbf83d81ff54d8aa9b7ee34be34e223c11064ef284f8e8.scope: Deactivated successfully.
Jan 22 00:16:17 compute-0 podman[283914]: 2026-01-22 00:16:17.409304018 +0000 UTC m=+0.182845309 container died 72631524fff3f8ac77fbf83d81ff54d8aa9b7ee34be34e223c11064ef284f8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:16:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:17.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
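
The beast lines above follow radosgw's fixed access-log layout: request pointer, client IP, user, timestamp, request line, HTTP status, byte count, and latency. A minimal parsing sketch in Python; the regex and group names are mine, inferred from the lines in this log rather than taken from any radosgw spec:

import re

# One beast access-log line, copied verbatim from the entry above.
LINE = ('beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous '
        '[22/Jan/2026:00:16:17.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')

BEAST_RE = re.compile(
    r'beast: (?P<req>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
    r'.*latency=(?P<latency>[\d.]+)s')

m = BEAST_RE.search(LINE)
if m:
    print(m.group('client'), m.group('request'), m.group('status'), m.group('latency'))

The anonymous HEAD / probes arriving from 192.168.122.100 and .102 every two seconds throughout this section look like load-balancer health checks against the RGW endpoint rather than real S3 traffic.
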
Jan 22 00:16:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a27d3782f07a315a56242fdd1e25830b1d972544d86e65ea74e931cc06a382c-merged.mount: Deactivated successfully.
Jan 22 00:16:17 compute-0 podman[283914]: 2026-01-22 00:16:17.462641762 +0000 UTC m=+0.236183053 container remove 72631524fff3f8ac77fbf83d81ff54d8aa9b7ee34be34e223c11064ef284f8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 00:16:17 compute-0 systemd[1]: libpod-conmon-72631524fff3f8ac77fbf83d81ff54d8aa9b7ee34be34e223c11064ef284f8e8.scope: Deactivated successfully.
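
The create → init → start → attach → died → remove sequence above, completed in about 0.2 s and bracketed by matching libpod-conmon scopes, is cephadm running a throwaway container to probe the host. The same lifecycle can be replayed from podman's event log; a sketch, assuming podman's --stream and --format json options behave as documented (the JSON field names are podman's, read defensively with .get):

import json
import subprocess

# Replay recent events and exit (--stream=false stops podman from following).
out = subprocess.run(
    ['podman', 'events', '--since', '5m', '--stream=false',
     '--filter', 'event=create', '--filter', 'event=remove',
     '--format', 'json'],
    capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    ev = json.loads(line)  # one JSON object per line
    print(ev.get('Status'), ev.get('Name'), ev.get('Image'))
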
Jan 22 00:16:17 compute-0 podman[283956]: 2026-01-22 00:16:17.680806566 +0000 UTC m=+0.060658461 container create 456989a423b99c044f9ee7f3626ce103b8a5f27137a6e6478b2694f4efb72540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:16:17 compute-0 systemd[1]: Started libpod-conmon-456989a423b99c044f9ee7f3626ce103b8a5f27137a6e6478b2694f4efb72540.scope.
Jan 22 00:16:17 compute-0 podman[283956]: 2026-01-22 00:16:17.657902307 +0000 UTC m=+0.037754182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:16:17 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:16:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a77fdcbb10bd289d66a0f0247b86493e0694fbef6b4e34a90eeed36d6f2ca4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:16:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a77fdcbb10bd289d66a0f0247b86493e0694fbef6b4e34a90eeed36d6f2ca4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:16:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a77fdcbb10bd289d66a0f0247b86493e0694fbef6b4e34a90eeed36d6f2ca4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:16:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a77fdcbb10bd289d66a0f0247b86493e0694fbef6b4e34a90eeed36d6f2ca4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
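
The xfs messages are the kernel noting that these overlay remounts carry 32-bit inode timestamps, valid only up to 0x7fffffff seconds after the Unix epoch. A quick check of what that limit means in calendar terms:

from datetime import datetime, timezone

limit = 0x7fffffff  # largest 32-bit signed time_t
print(limit)                                           # 2147483647
print(datetime.fromtimestamp(limit, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
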
Jan 22 00:16:17 compute-0 podman[283956]: 2026-01-22 00:16:17.783552382 +0000 UTC m=+0.163404287 container init 456989a423b99c044f9ee7f3626ce103b8a5f27137a6e6478b2694f4efb72540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:16:17 compute-0 podman[283956]: 2026-01-22 00:16:17.792013755 +0000 UTC m=+0.171865640 container start 456989a423b99c044f9ee7f3626ce103b8a5f27137a6e6478b2694f4efb72540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shockley, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 00:16:17 compute-0 podman[283956]: 2026-01-22 00:16:17.795327076 +0000 UTC m=+0.175178961 container attach 456989a423b99c044f9ee7f3626ce103b8a5f27137a6e6478b2694f4efb72540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shockley, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 00:16:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:16:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:17.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:16:18 compute-0 interesting_shockley[283972]: {
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:     "1": [
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:         {
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:             "devices": [
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:                 "/dev/loop3"
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:             ],
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:             "lv_name": "ceph_lv0",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:             "lv_size": "7511998464",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:             "name": "ceph_lv0",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:             "tags": {
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:                 "ceph.cluster_name": "ceph",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:                 "ceph.crush_device_class": "",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:                 "ceph.encrypted": "0",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:                 "ceph.osd_id": "1",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:                 "ceph.type": "block",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:                 "ceph.vdo": "0"
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:             },
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:             "type": "block",
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:             "vg_name": "ceph_vg0"
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:         }
Jan 22 00:16:18 compute-0 interesting_shockley[283972]:     ]
Jan 22 00:16:18 compute-0 interesting_shockley[283972]: }
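
The JSON the interesting_shockley container just printed has the shape of ceph-volume lvm list --format json output: a map of OSD id to logical volumes, with the ceph.* LV tags duplicated in parsed form under "tags". A parsing sketch; REPORT is the payload from the log, abbreviated to the fields actually used:

import json

REPORT = '''{"1": [{"devices": ["/dev/loop3"],
                    "lv_path": "/dev/ceph_vg0/ceph_lv0",
                    "lv_size": "7511998464",
                    "tags": {"ceph.osd_id": "1",
                             "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
                             "ceph.type": "block"}}]}'''

for osd_id, lvs in json.loads(REPORT).items():
    for lv in lvs:
        size_gib = int(lv['lv_size']) / 2**30  # lv_size is bytes, serialized as a string
        print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
              f"({size_gib:.1f} GiB, type={lv['tags']['ceph.type']})")
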
Jan 22 00:16:18 compute-0 systemd[1]: libpod-456989a423b99c044f9ee7f3626ce103b8a5f27137a6e6478b2694f4efb72540.scope: Deactivated successfully.
Jan 22 00:16:18 compute-0 podman[283956]: 2026-01-22 00:16:18.617383374 +0000 UTC m=+0.997235259 container died 456989a423b99c044f9ee7f3626ce103b8a5f27137a6e6478b2694f4efb72540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shockley, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 00:16:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9a77fdcbb10bd289d66a0f0247b86493e0694fbef6b4e34a90eeed36d6f2ca4-merged.mount: Deactivated successfully.
Jan 22 00:16:18 compute-0 podman[283956]: 2026-01-22 00:16:18.682192553 +0000 UTC m=+1.062044448 container remove 456989a423b99c044f9ee7f3626ce103b8a5f27137a6e6478b2694f4efb72540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shockley, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:16:18 compute-0 systemd[1]: libpod-conmon-456989a423b99c044f9ee7f3626ce103b8a5f27137a6e6478b2694f4efb72540.scope: Deactivated successfully.
Jan 22 00:16:18 compute-0 ceph-mon[74318]: pgmap v1759: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:18 compute-0 sudo[283847]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:18 compute-0 sudo[283992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:16:18 compute-0 sudo[283992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:18 compute-0 sudo[283992]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
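
_set_new_cache_sizes recurs every few seconds as the mon appears to rebalance its memory budget between the incremental-osdmap, full-osdmap, and rocksdb caches, logging raw byte counts. Converted for readability, with the values copied from the line above:

for name, b in [('cache_size', 1020054731), ('inc_alloc', 343932928),
                ('full_alloc', 348127232), ('kv_alloc', 318767104)]:
    print(f'{name:10s} {b / 2**20:7.1f} MiB')
# cache_size 972.8 MiB, inc_alloc 328.0, full_alloc 332.0, kv_alloc 304.0
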
Jan 22 00:16:18 compute-0 sudo[284017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:16:18 compute-0 sudo[284017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:18 compute-0 sudo[284017]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:19 compute-0 sudo[284042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:16:19 compute-0 sudo[284042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:19 compute-0 sudo[284042]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:19 compute-0 sudo[284067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 22 00:16:19 compute-0 sudo[284067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:19.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:19 compute-0 podman[284133]: 2026-01-22 00:16:19.552740733 +0000 UTC m=+0.066259106 container create e03e2599f1ab9f8998eced4924d3d59eb16db6464c971bff808e2b1c3623545f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_taussig, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 00:16:19 compute-0 systemd[1]: Started libpod-conmon-e03e2599f1ab9f8998eced4924d3d59eb16db6464c971bff808e2b1c3623545f.scope.
Jan 22 00:16:19 compute-0 podman[284133]: 2026-01-22 00:16:19.528719458 +0000 UTC m=+0.042237841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:16:19 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:16:19 compute-0 podman[284133]: 2026-01-22 00:16:19.651879946 +0000 UTC m=+0.165398329 container init e03e2599f1ab9f8998eced4924d3d59eb16db6464c971bff808e2b1c3623545f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_taussig, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 00:16:19 compute-0 podman[284133]: 2026-01-22 00:16:19.664356234 +0000 UTC m=+0.177874577 container start e03e2599f1ab9f8998eced4924d3d59eb16db6464c971bff808e2b1c3623545f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_taussig, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:16:19 compute-0 podman[284133]: 2026-01-22 00:16:19.668791661 +0000 UTC m=+0.182310084 container attach e03e2599f1ab9f8998eced4924d3d59eb16db6464c971bff808e2b1c3623545f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_taussig, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 22 00:16:19 compute-0 vibrant_taussig[284150]: 167 167
Jan 22 00:16:19 compute-0 systemd[1]: libpod-e03e2599f1ab9f8998eced4924d3d59eb16db6464c971bff808e2b1c3623545f.scope: Deactivated successfully.
Jan 22 00:16:19 compute-0 podman[284133]: 2026-01-22 00:16:19.670874695 +0000 UTC m=+0.184393068 container died e03e2599f1ab9f8998eced4924d3d59eb16db6464c971bff808e2b1c3623545f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_taussig, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 00:16:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9817ec9a0d996828e3debdf4879c72e84e3eb65b89ff7845f9fdaf5f4c8950ea-merged.mount: Deactivated successfully.
Jan 22 00:16:19 compute-0 podman[284133]: 2026-01-22 00:16:19.729193364 +0000 UTC m=+0.242711707 container remove e03e2599f1ab9f8998eced4924d3d59eb16db6464c971bff808e2b1c3623545f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_taussig, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:16:19 compute-0 systemd[1]: libpod-conmon-e03e2599f1ab9f8998eced4924d3d59eb16db6464c971bff808e2b1c3623545f.scope: Deactivated successfully.
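
The "167 167" printed by elastic_kepler and vibrant_taussig is the uid/gid pair cephadm reads out of the image to learn what owner its daemon directories need; 167:167 is the ceph user and group in these containers. A sketch of an equivalent one-off check (the stat-on-/var/lib/ceph approach is my reading of what these probes do, using the image digest from the log):

import subprocess

IMAGE = ('quay.io/ceph/ceph@sha256:'
         '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')

# stat the ceph data dir inside a throwaway container to learn its owner
uid, gid = subprocess.run(
    ['podman', 'run', '--rm', '--entrypoint', 'stat', IMAGE,
     '-c', '%u %g', '/var/lib/ceph'],
    capture_output=True, text=True, check=True).stdout.split()
print(uid, gid)  # expected: 167 167
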
Jan 22 00:16:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:19.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:19 compute-0 podman[284173]: 2026-01-22 00:16:19.967207373 +0000 UTC m=+0.071820988 container create 28492fbe2de739ad424af8a4aad302b4c3352a8a87b3bd77c6ff63f1c2ea1744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_khorana, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:16:20 compute-0 systemd[1]: Started libpod-conmon-28492fbe2de739ad424af8a4aad302b4c3352a8a87b3bd77c6ff63f1c2ea1744.scope.
Jan 22 00:16:20 compute-0 podman[284173]: 2026-01-22 00:16:19.938227564 +0000 UTC m=+0.042841269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:16:20 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c88aaae4e924cac862ed8902eb34e1bcd0666d1d213bea1a5d1f8e4dc17cd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c88aaae4e924cac862ed8902eb34e1bcd0666d1d213bea1a5d1f8e4dc17cd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c88aaae4e924cac862ed8902eb34e1bcd0666d1d213bea1a5d1f8e4dc17cd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c88aaae4e924cac862ed8902eb34e1bcd0666d1d213bea1a5d1f8e4dc17cd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:16:20 compute-0 podman[284173]: 2026-01-22 00:16:20.092144496 +0000 UTC m=+0.196758191 container init 28492fbe2de739ad424af8a4aad302b4c3352a8a87b3bd77c6ff63f1c2ea1744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 00:16:20 compute-0 podman[284173]: 2026-01-22 00:16:20.100140984 +0000 UTC m=+0.204754599 container start 28492fbe2de739ad424af8a4aad302b4c3352a8a87b3bd77c6ff63f1c2ea1744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:16:20 compute-0 podman[284173]: 2026-01-22 00:16:20.104291143 +0000 UTC m=+0.208904828 container attach 28492fbe2de739ad424af8a4aad302b4c3352a8a87b3bd77c6ff63f1c2ea1744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_khorana, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:16:20 compute-0 sudo[284194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:16:20 compute-0 sudo[284194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:20 compute-0 sudo[284194]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:20 compute-0 sudo[284219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:16:20 compute-0 sudo[284219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:20 compute-0 sudo[284219]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:20 compute-0 ceph-mon[74318]: pgmap v1760: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:21 compute-0 cranky_khorana[284189]: {
Jan 22 00:16:21 compute-0 cranky_khorana[284189]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 22 00:16:21 compute-0 cranky_khorana[284189]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:16:21 compute-0 cranky_khorana[284189]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 00:16:21 compute-0 cranky_khorana[284189]:         "osd_id": 1,
Jan 22 00:16:21 compute-0 cranky_khorana[284189]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:16:21 compute-0 cranky_khorana[284189]:         "type": "bluestore"
Jan 22 00:16:21 compute-0 cranky_khorana[284189]:     }
Jan 22 00:16:21 compute-0 cranky_khorana[284189]: }
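
cranky_khorana's output is the result of the ceph-volume raw list --format json call sudo-logged at 00:16:19: a map keyed by OSD fsid, with the resolved device and store type. Joined with the earlier lvm listing on the OSD fsid, the two probes describe the same OSD from different angles; a sketch with both payloads abbreviated:

import json

raw = json.loads('''{"4f45f4f4-edfc-474c-93fc-45d596171ed8":
    {"ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
     "device": "/dev/mapper/ceph_vg0-ceph_lv0",
     "osd_id": 1, "type": "bluestore"}}''')

# osd_fsid -> physical device, taken from the lvm list output further up
lvm = {'4f45f4f4-edfc-474c-93fc-45d596171ed8': '/dev/loop3'}

for osd_uuid, info in raw.items():
    print(f"osd.{info['osd_id']} ({info['type']}): {info['device']} "
          f"backed by {lvm.get(osd_uuid, '?')}")
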
Jan 22 00:16:21 compute-0 systemd[1]: libpod-28492fbe2de739ad424af8a4aad302b4c3352a8a87b3bd77c6ff63f1c2ea1744.scope: Deactivated successfully.
Jan 22 00:16:21 compute-0 podman[284173]: 2026-01-22 00:16:21.048064454 +0000 UTC m=+1.152678089 container died 28492fbe2de739ad424af8a4aad302b4c3352a8a87b3bd77c6ff63f1c2ea1744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:16:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-79c88aaae4e924cac862ed8902eb34e1bcd0666d1d213bea1a5d1f8e4dc17cd4-merged.mount: Deactivated successfully.
Jan 22 00:16:21 compute-0 podman[284173]: 2026-01-22 00:16:21.109382725 +0000 UTC m=+1.213996330 container remove 28492fbe2de739ad424af8a4aad302b4c3352a8a87b3bd77c6ff63f1c2ea1744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:16:21 compute-0 systemd[1]: libpod-conmon-28492fbe2de739ad424af8a4aad302b4c3352a8a87b3bd77c6ff63f1c2ea1744.scope: Deactivated successfully.
Jan 22 00:16:21 compute-0 sudo[284067]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:21 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:16:21 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:16:21 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:16:21 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
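
The two config-key set commands are the cephadm mgr module persisting the device inventory it just gathered into the mon's key/value store under per-host keys. The stored value can be read back with the stock CLI; a sketch using the key name from the audit line, assuming (as cephadm does elsewhere) that the value is serialized JSON:

import json
import subprocess

def config_key_get(key: str) -> str:
    # `ceph config-key get <key>` prints the stored value to stdout
    return subprocess.run(['ceph', 'config-key', 'get', key],
                          capture_output=True, text=True, check=True).stdout

devices = json.loads(config_key_get('mgr/cephadm/host.compute-0.devices.0'))
print(type(devices))
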
Jan 22 00:16:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:21 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev ddd1fa39-2006-4574-a511-ce184cd50867 does not exist
Jan 22 00:16:21 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 581269b7-7fd3-42e0-9cdf-8c13982d8c13 does not exist
Jan 22 00:16:21 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 6bb6be4b-cf90-4a9a-9bc6-c5664719e6c5 does not exist
Jan 22 00:16:21 compute-0 sudo[284276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:16:21 compute-0 sudo[284276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:21 compute-0 sudo[284276]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:21 compute-0 sudo[284301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 00:16:21 compute-0 sudo[284301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:21 compute-0 sudo[284301]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.003000094s ======
Jan 22 00:16:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:21.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000094s
Jan 22 00:16:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:21.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:22 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:16:22 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:16:22 compute-0 ceph-mon[74318]: pgmap v1761: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:16:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:23.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:16:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:23.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:16:24 compute-0 ceph-mon[74318]: pgmap v1762: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1020137519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:16:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:25.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:16:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:25.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:16:26 compute-0 ceph-mon[74318]: pgmap v1763: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2691876606' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:16:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2691876606' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:16:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1986572337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:16:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:16:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:27.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:16:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:27.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:28 compute-0 nova_compute[247516]: 2026-01-22 00:16:27.996 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:16:28 compute-0 nova_compute[247516]: 2026-01-22 00:16:27.998 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:16:28 compute-0 nova_compute[247516]: 2026-01-22 00:16:27.998 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:16:28 compute-0 nova_compute[247516]: 2026-01-22 00:16:28.011 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
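
The nova_compute lines show oslo.service's periodic-task machinery iterating ComputeManager's decorated methods; here _heal_instance_info_cache found no instances on this host to refresh. The registration pattern, reduced to a toy manager (the method name and 60 s spacing are illustrative, not nova's actual values):

from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class Manager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=60)
    def _heal_cache(self, context):
        # invoked by run_periodic_tasks() roughly every `spacing` seconds
        print('healing instance info cache...')

mgr = Manager()
mgr.run_periodic_tasks(context=None)
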
Jan 22 00:16:28 compute-0 ceph-mon[74318]: pgmap v1764: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:16:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:16:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:29.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:16:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:29.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:30 compute-0 ceph-mon[74318]: pgmap v1765: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:16:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:31.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:16:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:16:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:31.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:16:31 compute-0 nova_compute[247516]: 2026-01-22 00:16:31.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:16:31 compute-0 nova_compute[247516]: 2026-01-22 00:16:31.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
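
_reclaim_queued_deletes skips because reclaim_instance_interval is at its default of 0, so soft-deleted instances are never queued for delayed reclaim on this host. A minimal look at the option through oslo.config (the help text is mine; nova defines the real option in its own option modules):

from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opt(cfg.IntOpt(
    'reclaim_instance_interval', default=0,
    help='Seconds before a soft-deleted instance is reclaimed; '
         '<= 0 disables the periodic task, as seen in the log.'))
print(CONF.reclaim_instance_interval)  # 0 -> "skipping..."
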
Jan 22 00:16:32 compute-0 ceph-mon[74318]: pgmap v1766: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:32 compute-0 nova_compute[247516]: 2026-01-22 00:16:32.988 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:16:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:33.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:33 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3814211599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:16:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:16:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:16:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:33.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:16:34 compute-0 ceph-mon[74318]: pgmap v1767: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:34 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2502774675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:16:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:16:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:35.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:16:35 compute-0 nova_compute[247516]: 2026-01-22 00:16:35.604 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:16:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:35.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:36 compute-0 podman[284334]: 2026-01-22 00:16:36.038995255 +0000 UTC m=+0.138156994 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
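
The ovn_controller health event embeds its config_data as a Python-literal dict (single quotes, bare True), so json.loads would reject it; ast.literal_eval parses it safely. A sketch with the payload trimmed to a few keys:

import ast

CONFIG_DATA = ("{'depends_on': ['openvswitch.service'], "
               "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', "
               "'test': '/openstack/healthcheck'}, "
               "'privileged': True, 'restart': 'always'}")

cfg = ast.literal_eval(CONFIG_DATA)  # literals only, no code execution
print(cfg['healthcheck']['test'], cfg['restart'])
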
Jan 22 00:16:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:37 compute-0 ceph-mon[74318]: pgmap v1768: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:37.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:37.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:38 compute-0 nova_compute[247516]: 2026-01-22 00:16:38.011 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:16:38 compute-0 nova_compute[247516]: 2026-01-22 00:16:38.012 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:16:38 compute-0 ceph-mon[74318]: pgmap v1769: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:16:39
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['vms', 'default.rgw.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.data']
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
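
This balancer pass ran in upmap mode over all eleven pools and prepared 0 of its budget of 10 changes, which is the expected outcome with every PG already active+clean and evenly placed. The module's state can be queried the same way from the CLI; a sketch (the key names are from the balancer's JSON status as I recall it, hence the defensive .get):

import json
import subprocess

status = json.loads(subprocess.run(
    ['ceph', 'balancer', 'status', '--format', 'json'],
    capture_output=True, text=True, check=True).stdout)
print(status.get('active'), status.get('mode'))  # e.g. True upmap
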
Jan 22 00:16:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:39.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:16:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:16:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:39.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:39 compute-0 nova_compute[247516]: 2026-01-22 00:16:39.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:16:40 compute-0 sudo[284363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:16:40 compute-0 sudo[284363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:40 compute-0 sudo[284363]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:40 compute-0 sudo[284388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:16:40 compute-0 sudo[284388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:16:40 compute-0 sudo[284388]: pam_unix(sudo:session): session closed for user root
Jan 22 00:16:40 compute-0 ceph-mon[74318]: pgmap v1770: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:41.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:41.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:41 compute-0 nova_compute[247516]: 2026-01-22 00:16:41.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:16:41 compute-0 nova_compute[247516]: 2026-01-22 00:16:41.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:16:42 compute-0 nova_compute[247516]: 2026-01-22 00:16:42.028 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:16:42 compute-0 nova_compute[247516]: 2026-01-22 00:16:42.029 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:16:42 compute-0 nova_compute[247516]: 2026-01-22 00:16:42.029 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:16:42 compute-0 nova_compute[247516]: 2026-01-22 00:16:42.029 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:16:42 compute-0 nova_compute[247516]: 2026-01-22 00:16:42.030 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:16:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:16:42 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2821472035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:16:42 compute-0 nova_compute[247516]: 2026-01-22 00:16:42.505 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
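
update_available_resource sizes Ceph-backed storage by shelling out to the exact `ceph df` command logged above and parsing its JSON. A sketch of the same round trip, with key names assumed from recent Ceph releases:

    import json
    import subprocess

    # The exact command from the log; --id/--conf select the openstack client.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    stats = json.loads(subprocess.check_output(cmd))["stats"]

    gib = 1024 ** 3  # stats keys assumed: total_bytes / total_avail_bytes
    print(f"{stats['total_avail_bytes'] / gib:.1f} GiB free of "
          f"{stats['total_bytes'] / gib:.1f} GiB")
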
Jan 22 00:16:42 compute-0 nova_compute[247516]: 2026-01-22 00:16:42.696 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:16:42 compute-0 nova_compute[247516]: 2026-01-22 00:16:42.698 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5136MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:16:42 compute-0 nova_compute[247516]: 2026-01-22 00:16:42.698 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:16:42 compute-0 nova_compute[247516]: 2026-01-22 00:16:42.699 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:16:42 compute-0 nova_compute[247516]: 2026-01-22 00:16:42.788 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 22 00:16:42 compute-0 nova_compute[247516]: 2026-01-22 00:16:42.788 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:16:42 compute-0 nova_compute[247516]: 2026-01-22 00:16:42.788 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:16:42 compute-0 nova_compute[247516]: 2026-01-22 00:16:42.823 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:16:42 compute-0 ceph-mon[74318]: pgmap v1771: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:42 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2821472035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:16:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:16:43 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2711621364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:16:43 compute-0 nova_compute[247516]: 2026-01-22 00:16:43.304 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:16:43 compute-0 nova_compute[247516]: 2026-01-22 00:16:43.313 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:16:43 compute-0 nova_compute[247516]: 2026-01-22 00:16:43.335 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
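
The inventory above fixes what the scheduler may place here: capacity per resource class is (total - reserved) * allocation_ratio, so this host offers 32 schedulable vCPUs, 7167 MB of RAM and 18 GB of disk. Reproduced with the logged values:

    # Schedulable capacity implied by the logged inventory record.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {int(cap)} schedulable")  # 32 / 7167 / 18
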
Jan 22 00:16:43 compute-0 nova_compute[247516]: 2026-01-22 00:16:43.338 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:16:43 compute-0 nova_compute[247516]: 2026-01-22 00:16:43.339 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:16:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:43.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:43 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2711621364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:16:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:16:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:43.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:43 compute-0 podman[284459]: 2026-01-22 00:16:43.944507504 +0000 UTC m=+0.063017374 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
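
The podman record above is a periodic healthcheck event: the test mounted at /var/lib/openstack/healthchecks/ovn_metadata_agent ran inside the container and reported healthy with no failing streak. The same check can be driven by hand; a sketch using the container name from the log:

    import subprocess

    # `podman healthcheck run` executes the configured test and exits 0
    # when healthy, matching the health_status=healthy event above.
    rc = subprocess.run(["podman", "healthcheck", "run",
                         "ovn_metadata_agent"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
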
Jan 22 00:16:44 compute-0 nova_compute[247516]: 2026-01-22 00:16:44.340 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:16:44 compute-0 ceph-mon[74318]: pgmap v1772: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:45.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:45.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:47 compute-0 ceph-mon[74318]: pgmap v1773: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:47.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 22 00:16:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 22 00:16:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 22 00:16:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 22 00:16:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:47.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 22 00:16:47 compute-0 radosgw[92982]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
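
These RGWReshardLock INFO lines are benign coordination noise: several radosgw workers scan the reshard queue's shards, and whichever already holds a shard's lock processes it while the others skip. To inspect what is actually queued, a hedged sketch around `radosgw-admin reshard list`:

    import json
    import subprocess

    # Assumes radosgw-admin can reach the cluster with its default keyring.
    entries = json.loads(
        subprocess.check_output(["radosgw-admin", "reshard", "list"]))
    print(f"{len(entries)} bucket(s) queued for resharding")
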
Jan 22 00:16:48 compute-0 ceph-mon[74318]: pgmap v1774: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:16:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:16:48.778 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:16:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:16:48.781 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:16:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:16:48.781 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:16:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:16:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
Jan 22 00:16:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:16:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:49.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:16:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:49.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
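
Each beast access line carries peer address, user, timestamp, request line, status, byte count and latency in a fixed layout. A small regex sketch that recovers those fields from a line copied verbatim from this log:

    import re

    LINE = ('beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous '
            '[22/Jan/2026:00:16:49.947 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')

    pat = re.compile(
        r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
        r'latency=(?P<latency>[\d.]+)s')
    m = pat.match(LINE)
    print(m.group("addr"), m.group("status"), m.group("latency"))
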
Jan 22 00:16:50 compute-0 ceph-mon[74318]: pgmap v1775: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
Jan 22 00:16:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 46 KiB/s rd, 0 B/s wr, 77 op/s
Jan 22 00:16:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:51.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:16:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:51.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:16:52 compute-0 ceph-mon[74318]: pgmap v1776: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 46 KiB/s rd, 0 B/s wr, 77 op/s
Jan 22 00:16:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 75 KiB/s rd, 0 B/s wr, 125 op/s
Jan 22 00:16:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:16:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:53.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:16:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:16:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:16:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:53.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:16:54 compute-0 ceph-mon[74318]: pgmap v1777: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 75 KiB/s rd, 0 B/s wr, 125 op/s
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:16:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
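
The pg_autoscaler arithmetic above is: raw PG target = space fraction * bias * the cluster's PG budget, then quantization to a power of two (pools already within bounds keep their current pg_num). The logged figures are consistent with a budget of 300, i.e. the default mon_target_pg_per_osd=100 times this cluster's three OSDs (an inference from the numbers, not stated in the log):

    # Reproduce the 'images' pool line from the log.
    usage_fraction = 0.0019031427391587568   # "using ... of space"
    bias = 1.0
    pg_budget = 100 * 3   # mon_target_pg_per_osd * OSD count (assumed)

    raw_target = usage_fraction * bias * pg_budget
    print(raw_target)     # 0.570942821747627, matching "pg target" above
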
Jan 22 00:16:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 104 KiB/s rd, 0 B/s wr, 173 op/s
Jan 22 00:16:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:55.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:16:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:55.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:16:56 compute-0 ceph-mon[74318]: pgmap v1778: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 104 KiB/s rd, 0 B/s wr, 173 op/s
Jan 22 00:16:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 104 KiB/s rd, 0 B/s wr, 173 op/s
Jan 22 00:16:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:16:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:57.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:16:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:16:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:57.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:16:58 compute-0 ceph-mon[74318]: pgmap v1779: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 104 KiB/s rd, 0 B/s wr, 173 op/s
Jan 22 00:16:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:16:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 104 KiB/s rd, 0 B/s wr, 173 op/s
Jan 22 00:16:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:16:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:16:59.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:16:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:16:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:16:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:16:59.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:17:00 compute-0 ceph-mon[74318]: pgmap v1780: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 104 KiB/s rd, 0 B/s wr, 173 op/s
Jan 22 00:17:00 compute-0 sudo[284486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:17:00 compute-0 sudo[284486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:00 compute-0 sudo[284486]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:00 compute-0 sudo[284511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:17:00 compute-0 sudo[284511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:00 compute-0 sudo[284511]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 96 KiB/s rd, 0 B/s wr, 160 op/s
Jan 22 00:17:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:17:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:01.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:17:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:01.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:02 compute-0 ceph-mon[74318]: pgmap v1781: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 96 KiB/s rd, 0 B/s wr, 160 op/s
Jan 22 00:17:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 0 B/s wr, 96 op/s
Jan 22 00:17:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:17:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:03.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:17:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:17:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:17:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:03.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:17:04 compute-0 ceph-mon[74318]: pgmap v1782: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 0 B/s wr, 96 op/s
Jan 22 00:17:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 0 B/s wr, 47 op/s
Jan 22 00:17:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:05.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:17:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:05.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:17:06 compute-0 ceph-mon[74318]: pgmap v1783: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 0 B/s wr, 47 op/s
Jan 22 00:17:07 compute-0 podman[284539]: 2026-01-22 00:17:07.055472006 +0000 UTC m=+0.163214331 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 00:17:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:17:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:07.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:17:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:17:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:07.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:17:08 compute-0 ceph-mon[74318]: pgmap v1784: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:17:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:17:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:17:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:17:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:17:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:17:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:17:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:09.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:17:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:09.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:17:10 compute-0 ceph-mon[74318]: pgmap v1785: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:11.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:11.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:12 compute-0 ceph-mon[74318]: pgmap v1786: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:13.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:17:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:17:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:13.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:17:14 compute-0 ceph-mon[74318]: pgmap v1787: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:14 compute-0 podman[284569]: 2026-01-22 00:17:14.975802162 +0000 UTC m=+0.082804274 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 00:17:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:15.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:15.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:16 compute-0 ceph-mon[74318]: pgmap v1788: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:17.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:17:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:17.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:17:18 compute-0 ceph-mon[74318]: pgmap v1789: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:17:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:19.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:19.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:20 compute-0 ceph-mon[74318]: pgmap v1790: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:20 compute-0 sudo[284592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:17:20 compute-0 sudo[284592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:20 compute-0 sudo[284592]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:21 compute-0 sudo[284617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:17:21 compute-0 sudo[284617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:21 compute-0 sudo[284617]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:17:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:21.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:17:21 compute-0 sudo[284643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:17:21 compute-0 sudo[284643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:21 compute-0 sudo[284643]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:21 compute-0 sudo[284668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:17:21 compute-0 sudo[284668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:21 compute-0 sudo[284668]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:21 compute-0 sudo[284693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:17:21 compute-0 sudo[284693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:21 compute-0 sudo[284693]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:21 compute-0 sudo[284718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 00:17:21 compute-0 sudo[284718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:21.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 22 00:17:22 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:17:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 22 00:17:22 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:17:22 compute-0 sudo[284718]: pam_unix(sudo:session): session closed for user root
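
The long sudo command above is cephadm's gather-facts pass: the mgr ships a content-addressed copy of the cephadm binary under /var/lib/ceph/<fsid>/ and runs it to collect host inventory as JSON. A sketch of pulling the same data interactively, assuming a packaged cephadm on PATH and typical fact keys:

    import json
    import subprocess

    # `cephadm gather-facts` prints host facts (CPU, memory, NICs, ...) as JSON.
    facts = json.loads(
        subprocess.check_output(["sudo", "cephadm", "gather-facts"]))
    print(facts.get("hostname"), facts.get("memory_total_kb"))  # keys assumed
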
Jan 22 00:17:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 22 00:17:22 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 00:17:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 22 00:17:22 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 00:17:23 compute-0 ceph-mon[74318]: pgmap v1791: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:17:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:17:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 00:17:23 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 00:17:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:17:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:17:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 00:17:23 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:17:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 00:17:23 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:17:23 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 944efa36-a709-4c39-b75e-1b071eb95eaa does not exist
Jan 22 00:17:23 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev e15a4d55-0172-4d06-88ec-c3750cba55ea does not exist
Jan 22 00:17:23 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 147a07f6-b24e-460a-8f3a-55f20e292dfe does not exist
Jan 22 00:17:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 00:17:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:17:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 00:17:23 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:17:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:17:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
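
`config generate-minimal-conf` plus the `auth get` calls are the mgr assembling a minimal ceph.conf and keyring to push to a managed host. The monitor-side command is available from any admin node:

    import subprocess

    # Emits a minimal ceph.conf (fsid and mon_host), the same payload the
    # mgr requests in the audit lines above.
    print(subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"], text=True))
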
Jan 22 00:17:23 compute-0 sudo[284774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:17:23 compute-0 sudo[284774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:23 compute-0 sudo[284774]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:23.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:23 compute-0 sudo[284800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:17:23 compute-0 sudo[284800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:23 compute-0 sudo[284800]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:23 compute-0 sudo[284825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:17:23 compute-0 sudo[284825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:23 compute-0 sudo[284825]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:23 compute-0 sudo[284850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 00:17:23 compute-0 sudo[284850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:17:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:17:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:23.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:17:24 compute-0 podman[284915]: 2026-01-22 00:17:24.034945495 +0000 UTC m=+0.044437816 container create f4507729831ff166903149c097f95dd206f61cb30a3860a6541568e99bc31b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:17:24 compute-0 systemd[1]: Started libpod-conmon-f4507729831ff166903149c097f95dd206f61cb30a3860a6541568e99bc31b25.scope.
Jan 22 00:17:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:17:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:17:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:17:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:17:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:17:24 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:17:24 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:17:24 compute-0 podman[284915]: 2026-01-22 00:17:24.017574418 +0000 UTC m=+0.027066769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:17:24 compute-0 podman[284915]: 2026-01-22 00:17:24.127889602 +0000 UTC m=+0.137381943 container init f4507729831ff166903149c097f95dd206f61cb30a3860a6541568e99bc31b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 00:17:24 compute-0 podman[284915]: 2026-01-22 00:17:24.140956736 +0000 UTC m=+0.150449097 container start f4507729831ff166903149c097f95dd206f61cb30a3860a6541568e99bc31b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:17:24 compute-0 podman[284915]: 2026-01-22 00:17:24.14528518 +0000 UTC m=+0.154777541 container attach f4507729831ff166903149c097f95dd206f61cb30a3860a6541568e99bc31b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 00:17:24 compute-0 eloquent_ishizaka[284932]: 167 167
Jan 22 00:17:24 compute-0 systemd[1]: libpod-f4507729831ff166903149c097f95dd206f61cb30a3860a6541568e99bc31b25.scope: Deactivated successfully.
Jan 22 00:17:24 compute-0 podman[284915]: 2026-01-22 00:17:24.151301946 +0000 UTC m=+0.160794307 container died f4507729831ff166903149c097f95dd206f61cb30a3860a6541568e99bc31b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 00:17:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebe1cf481dbde6eeed8e8b77e638b6014a648d5e4dace00d960f2a111d9f5247-merged.mount: Deactivated successfully.
Jan 22 00:17:24 compute-0 podman[284915]: 2026-01-22 00:17:24.207950629 +0000 UTC m=+0.217442980 container remove f4507729831ff166903149c097f95dd206f61cb30a3860a6541568e99bc31b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 00:17:24 compute-0 systemd[1]: libpod-conmon-f4507729831ff166903149c097f95dd206f61cb30a3860a6541568e99bc31b25.scope: Deactivated successfully.
Jan 22 00:17:24 compute-0 podman[284956]: 2026-01-22 00:17:24.423259213 +0000 UTC m=+0.043625471 container create 7010fb23a02dca52564abc458eb97e434794aa63e8183922d475cbad4df909ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 00:17:24 compute-0 systemd[1]: Started libpod-conmon-7010fb23a02dca52564abc458eb97e434794aa63e8183922d475cbad4df909ad.scope.
Jan 22 00:17:24 compute-0 podman[284956]: 2026-01-22 00:17:24.407873167 +0000 UTC m=+0.028239455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:17:24 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f65df10a4b5040db9860a83cddfcac8a0ad83de04b1bf2188728ae57f700953/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f65df10a4b5040db9860a83cddfcac8a0ad83de04b1bf2188728ae57f700953/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f65df10a4b5040db9860a83cddfcac8a0ad83de04b1bf2188728ae57f700953/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f65df10a4b5040db9860a83cddfcac8a0ad83de04b1bf2188728ae57f700953/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f65df10a4b5040db9860a83cddfcac8a0ad83de04b1bf2188728ae57f700953/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 00:17:24 compute-0 podman[284956]: 2026-01-22 00:17:24.534709952 +0000 UTC m=+0.155076290 container init 7010fb23a02dca52564abc458eb97e434794aa63e8183922d475cbad4df909ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:17:24 compute-0 podman[284956]: 2026-01-22 00:17:24.546997783 +0000 UTC m=+0.167364081 container start 7010fb23a02dca52564abc458eb97e434794aa63e8183922d475cbad4df909ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:17:24 compute-0 podman[284956]: 2026-01-22 00:17:24.551842272 +0000 UTC m=+0.172208570 container attach 7010fb23a02dca52564abc458eb97e434794aa63e8183922d475cbad4df909ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sanderson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 00:17:25 compute-0 ceph-mon[74318]: pgmap v1792: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2564558879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:17:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:25 compute-0 goofy_sanderson[284972]: --> passed data devices: 0 physical, 1 LVM
Jan 22 00:17:25 compute-0 goofy_sanderson[284972]: --> relative data size: 1.0
Jan 22 00:17:25 compute-0 goofy_sanderson[284972]: --> All data devices are unavailable
Jan 22 00:17:25 compute-0 systemd[1]: libpod-7010fb23a02dca52564abc458eb97e434794aa63e8183922d475cbad4df909ad.scope: Deactivated successfully.
Jan 22 00:17:25 compute-0 podman[284956]: 2026-01-22 00:17:25.386667429 +0000 UTC m=+1.007033727 container died 7010fb23a02dca52564abc458eb97e434794aa63e8183922d475cbad4df909ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 22 00:17:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f65df10a4b5040db9860a83cddfcac8a0ad83de04b1bf2188728ae57f700953-merged.mount: Deactivated successfully.
Jan 22 00:17:25 compute-0 podman[284956]: 2026-01-22 00:17:25.448261565 +0000 UTC m=+1.068627823 container remove 7010fb23a02dca52564abc458eb97e434794aa63e8183922d475cbad4df909ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sanderson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:17:25 compute-0 systemd[1]: libpod-conmon-7010fb23a02dca52564abc458eb97e434794aa63e8183922d475cbad4df909ad.scope: Deactivated successfully.
Jan 22 00:17:25 compute-0 sudo[284850]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:17:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:25.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:17:25 compute-0 sudo[285002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:17:25 compute-0 sudo[285002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:25 compute-0 sudo[285002]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:25 compute-0 sudo[285027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:17:25 compute-0 sudo[285027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:25 compute-0 sudo[285027]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:25 compute-0 sudo[285052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:17:25 compute-0 sudo[285052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:25 compute-0 sudo[285052]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:25 compute-0 sudo[285077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 22 00:17:25 compute-0 sudo[285077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:17:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:25.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:17:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/956067155' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:17:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/956067155' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:17:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2122539701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:17:26 compute-0 podman[285141]: 2026-01-22 00:17:26.233721603 +0000 UTC m=+0.060398150 container create 95fff1320b3deeb7fd0823d1d054edcf962f314b02fe565dfe00362e8db94ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 22 00:17:26 compute-0 systemd[1]: Started libpod-conmon-95fff1320b3deeb7fd0823d1d054edcf962f314b02fe565dfe00362e8db94ec2.scope.
Jan 22 00:17:26 compute-0 podman[285141]: 2026-01-22 00:17:26.202170967 +0000 UTC m=+0.028847574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:17:26 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:17:26 compute-0 podman[285141]: 2026-01-22 00:17:26.331723566 +0000 UTC m=+0.158400183 container init 95fff1320b3deeb7fd0823d1d054edcf962f314b02fe565dfe00362e8db94ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_kalam, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 00:17:26 compute-0 podman[285141]: 2026-01-22 00:17:26.34249448 +0000 UTC m=+0.169171037 container start 95fff1320b3deeb7fd0823d1d054edcf962f314b02fe565dfe00362e8db94ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_kalam, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:17:26 compute-0 podman[285141]: 2026-01-22 00:17:26.346344299 +0000 UTC m=+0.173020896 container attach 95fff1320b3deeb7fd0823d1d054edcf962f314b02fe565dfe00362e8db94ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 22 00:17:26 compute-0 unruffled_kalam[285157]: 167 167
Jan 22 00:17:26 compute-0 systemd[1]: libpod-95fff1320b3deeb7fd0823d1d054edcf962f314b02fe565dfe00362e8db94ec2.scope: Deactivated successfully.
Jan 22 00:17:26 compute-0 podman[285141]: 2026-01-22 00:17:26.350160357 +0000 UTC m=+0.176836904 container died 95fff1320b3deeb7fd0823d1d054edcf962f314b02fe565dfe00362e8db94ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_kalam, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:17:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8a27b59ea96925d69a271eab8ac166f0d8c3db97f1e1e80540da2d6c8e93359-merged.mount: Deactivated successfully.
Jan 22 00:17:26 compute-0 podman[285141]: 2026-01-22 00:17:26.396925564 +0000 UTC m=+0.223602121 container remove 95fff1320b3deeb7fd0823d1d054edcf962f314b02fe565dfe00362e8db94ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 00:17:26 compute-0 systemd[1]: libpod-conmon-95fff1320b3deeb7fd0823d1d054edcf962f314b02fe565dfe00362e8db94ec2.scope: Deactivated successfully.
Jan 22 00:17:26 compute-0 podman[285180]: 2026-01-22 00:17:26.595067666 +0000 UTC m=+0.056587101 container create ebb54c30c5c4532cc9de90a2ce1d4412c952736d7d90c246ac3b88c565ec191d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ganguly, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:17:26 compute-0 systemd[1]: Started libpod-conmon-ebb54c30c5c4532cc9de90a2ce1d4412c952736d7d90c246ac3b88c565ec191d.scope.
Jan 22 00:17:26 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:17:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9122ee71a73335bf8f482c9451b9f9828ca4da0a75659020a71ceb6ff5d7c581/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:17:26 compute-0 podman[285180]: 2026-01-22 00:17:26.570590719 +0000 UTC m=+0.032110154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:17:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9122ee71a73335bf8f482c9451b9f9828ca4da0a75659020a71ceb6ff5d7c581/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:17:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9122ee71a73335bf8f482c9451b9f9828ca4da0a75659020a71ceb6ff5d7c581/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:17:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9122ee71a73335bf8f482c9451b9f9828ca4da0a75659020a71ceb6ff5d7c581/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:17:26 compute-0 podman[285180]: 2026-01-22 00:17:26.682667958 +0000 UTC m=+0.144187353 container init ebb54c30c5c4532cc9de90a2ce1d4412c952736d7d90c246ac3b88c565ec191d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ganguly, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 00:17:26 compute-0 podman[285180]: 2026-01-22 00:17:26.693872164 +0000 UTC m=+0.155391599 container start ebb54c30c5c4532cc9de90a2ce1d4412c952736d7d90c246ac3b88c565ec191d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ganguly, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 00:17:26 compute-0 podman[285180]: 2026-01-22 00:17:26.698245599 +0000 UTC m=+0.159765094 container attach ebb54c30c5c4532cc9de90a2ce1d4412c952736d7d90c246ac3b88c565ec191d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ganguly, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:17:27 compute-0 ceph-mon[74318]: pgmap v1793: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]: {
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:     "1": [
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:         {
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:             "devices": [
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:                 "/dev/loop3"
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:             ],
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:             "lv_name": "ceph_lv0",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:             "lv_size": "7511998464",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:             "name": "ceph_lv0",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:             "tags": {
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:                 "ceph.cluster_name": "ceph",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:                 "ceph.crush_device_class": "",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:                 "ceph.encrypted": "0",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:                 "ceph.osd_id": "1",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:                 "ceph.type": "block",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:                 "ceph.vdo": "0"
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:             },
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:             "type": "block",
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:             "vg_name": "ceph_vg0"
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:         }
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]:     ]
Jan 22 00:17:27 compute-0 elegant_ganguly[285197]: }
Jan 22 00:17:27 compute-0 systemd[1]: libpod-ebb54c30c5c4532cc9de90a2ce1d4412c952736d7d90c246ac3b88c565ec191d.scope: Deactivated successfully.
Jan 22 00:17:27 compute-0 podman[285180]: 2026-01-22 00:17:27.443879236 +0000 UTC m=+0.905398681 container died ebb54c30c5c4532cc9de90a2ce1d4412c952736d7d90c246ac3b88c565ec191d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ganguly, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Jan 22 00:17:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-9122ee71a73335bf8f482c9451b9f9828ca4da0a75659020a71ceb6ff5d7c581-merged.mount: Deactivated successfully.
Jan 22 00:17:27 compute-0 podman[285180]: 2026-01-22 00:17:27.508550347 +0000 UTC m=+0.970069742 container remove ebb54c30c5c4532cc9de90a2ce1d4412c952736d7d90c246ac3b88c565ec191d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ganguly, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:17:27 compute-0 systemd[1]: libpod-conmon-ebb54c30c5c4532cc9de90a2ce1d4412c952736d7d90c246ac3b88c565ec191d.scope: Deactivated successfully.
Jan 22 00:17:27 compute-0 sudo[285077]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:27.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:27 compute-0 sudo[285218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:17:27 compute-0 sudo[285218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:27 compute-0 sudo[285218]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:27 compute-0 sudo[285243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:17:27 compute-0 sudo[285243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:27 compute-0 sudo[285243]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:27 compute-0 sudo[285268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:17:27 compute-0 sudo[285268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:27 compute-0 sudo[285268]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:27 compute-0 sudo[285293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 22 00:17:27 compute-0 sudo[285293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:27.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:28 compute-0 ceph-mon[74318]: pgmap v1794: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:28 compute-0 podman[285358]: 2026-01-22 00:17:28.339794093 +0000 UTC m=+0.028724691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:17:28 compute-0 podman[285358]: 2026-01-22 00:17:28.695858752 +0000 UTC m=+0.384789350 container create 40998cfb3709f7afa4b3a797b0e3c6c7afd6975480d7e2f792558bb0af9559b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:17:28 compute-0 systemd[1]: Started libpod-conmon-40998cfb3709f7afa4b3a797b0e3c6c7afd6975480d7e2f792558bb0af9559b0.scope.
Jan 22 00:17:28 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:17:28 compute-0 podman[285358]: 2026-01-22 00:17:28.790385358 +0000 UTC m=+0.479316016 container init 40998cfb3709f7afa4b3a797b0e3c6c7afd6975480d7e2f792558bb0af9559b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 00:17:28 compute-0 podman[285358]: 2026-01-22 00:17:28.800895933 +0000 UTC m=+0.489826531 container start 40998cfb3709f7afa4b3a797b0e3c6c7afd6975480d7e2f792558bb0af9559b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:17:28 compute-0 podman[285358]: 2026-01-22 00:17:28.80499882 +0000 UTC m=+0.493929478 container attach 40998cfb3709f7afa4b3a797b0e3c6c7afd6975480d7e2f792558bb0af9559b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 00:17:28 compute-0 tender_babbage[285374]: 167 167
Jan 22 00:17:28 compute-0 systemd[1]: libpod-40998cfb3709f7afa4b3a797b0e3c6c7afd6975480d7e2f792558bb0af9559b0.scope: Deactivated successfully.
Jan 22 00:17:28 compute-0 podman[285358]: 2026-01-22 00:17:28.810201711 +0000 UTC m=+0.499132279 container died 40998cfb3709f7afa4b3a797b0e3c6c7afd6975480d7e2f792558bb0af9559b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 00:17:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9a104f2bda98e6ffa49d91ff8be28041007112e200ce5f6d15076d03d265bae-merged.mount: Deactivated successfully.
Jan 22 00:17:28 compute-0 podman[285358]: 2026-01-22 00:17:28.859972291 +0000 UTC m=+0.548902849 container remove 40998cfb3709f7afa4b3a797b0e3c6c7afd6975480d7e2f792558bb0af9559b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:17:28 compute-0 systemd[1]: libpod-conmon-40998cfb3709f7afa4b3a797b0e3c6c7afd6975480d7e2f792558bb0af9559b0.scope: Deactivated successfully.
Jan 22 00:17:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:17:29 compute-0 podman[285400]: 2026-01-22 00:17:29.09901851 +0000 UTC m=+0.071488624 container create 2c74d98da5848484f7c8a116934041fc8a742f548b7ea71ec456e0e1ab574a4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:17:29 compute-0 systemd[1]: Started libpod-conmon-2c74d98da5848484f7c8a116934041fc8a742f548b7ea71ec456e0e1ab574a4e.scope.
Jan 22 00:17:29 compute-0 podman[285400]: 2026-01-22 00:17:29.074079708 +0000 UTC m=+0.046549812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:17:29 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e4f7c38f951a014bc4fcb9086386745e68fb8387bf8fd455aa91432c28e9c0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e4f7c38f951a014bc4fcb9086386745e68fb8387bf8fd455aa91432c28e9c0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e4f7c38f951a014bc4fcb9086386745e68fb8387bf8fd455aa91432c28e9c0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e4f7c38f951a014bc4fcb9086386745e68fb8387bf8fd455aa91432c28e9c0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:17:29 compute-0 podman[285400]: 2026-01-22 00:17:29.189418768 +0000 UTC m=+0.161888842 container init 2c74d98da5848484f7c8a116934041fc8a742f548b7ea71ec456e0e1ab574a4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hypatia, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 00:17:29 compute-0 podman[285400]: 2026-01-22 00:17:29.198498558 +0000 UTC m=+0.170968682 container start 2c74d98da5848484f7c8a116934041fc8a742f548b7ea71ec456e0e1ab574a4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:17:29 compute-0 podman[285400]: 2026-01-22 00:17:29.203254696 +0000 UTC m=+0.175724860 container attach 2c74d98da5848484f7c8a116934041fc8a742f548b7ea71ec456e0e1ab574a4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hypatia, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:17:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:29.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:17:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:29.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:17:29 compute-0 nova_compute[247516]: 2026-01-22 00:17:29.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:17:29 compute-0 nova_compute[247516]: 2026-01-22 00:17:29.995 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:17:29 compute-0 nova_compute[247516]: 2026-01-22 00:17:29.995 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:17:30 compute-0 nova_compute[247516]: 2026-01-22 00:17:30.022 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 00:17:30 compute-0 unruffled_hypatia[285416]: {
Jan 22 00:17:30 compute-0 unruffled_hypatia[285416]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 22 00:17:30 compute-0 unruffled_hypatia[285416]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:17:30 compute-0 unruffled_hypatia[285416]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 00:17:30 compute-0 unruffled_hypatia[285416]:         "osd_id": 1,
Jan 22 00:17:30 compute-0 unruffled_hypatia[285416]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:17:30 compute-0 unruffled_hypatia[285416]:         "type": "bluestore"
Jan 22 00:17:30 compute-0 unruffled_hypatia[285416]:     }
Jan 22 00:17:30 compute-0 unruffled_hypatia[285416]: }
Jan 22 00:17:30 compute-0 systemd[1]: libpod-2c74d98da5848484f7c8a116934041fc8a742f548b7ea71ec456e0e1ab574a4e.scope: Deactivated successfully.
Jan 22 00:17:30 compute-0 systemd[1]: libpod-2c74d98da5848484f7c8a116934041fc8a742f548b7ea71ec456e0e1ab574a4e.scope: Consumed 1.007s CPU time.
Jan 22 00:17:30 compute-0 podman[285400]: 2026-01-22 00:17:30.203494951 +0000 UTC m=+1.175965065 container died 2c74d98da5848484f7c8a116934041fc8a742f548b7ea71ec456e0e1ab574a4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 00:17:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e4f7c38f951a014bc4fcb9086386745e68fb8387bf8fd455aa91432c28e9c0f-merged.mount: Deactivated successfully.
Jan 22 00:17:30 compute-0 podman[285400]: 2026-01-22 00:17:30.274668464 +0000 UTC m=+1.247138548 container remove 2c74d98da5848484f7c8a116934041fc8a742f548b7ea71ec456e0e1ab574a4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:17:30 compute-0 ceph-mon[74318]: pgmap v1795: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:30 compute-0 systemd[1]: libpod-conmon-2c74d98da5848484f7c8a116934041fc8a742f548b7ea71ec456e0e1ab574a4e.scope: Deactivated successfully.
Jan 22 00:17:30 compute-0 sudo[285293]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:17:30 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:17:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:17:30 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:17:30 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 9a8e7bc4-53d9-4291-bb6c-8e7eb61d32e1 does not exist
Jan 22 00:17:30 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 1644c8f7-348e-4677-bf13-9e5ab8cc8083 does not exist
Jan 22 00:17:30 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 9dc74c89-07dd-4850-bf31-2df37b92d3ee does not exist
Jan 22 00:17:30 compute-0 sudo[285452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:17:30 compute-0 sudo[285452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:30 compute-0 sudo[285452]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:30 compute-0 sudo[285477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 00:17:30 compute-0 sudo[285477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:30 compute-0 sudo[285477]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:17:31 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:17:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:31.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:31 compute-0 nova_compute[247516]: 2026-01-22 00:17:31.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:17:31 compute-0 nova_compute[247516]: 2026-01-22 00:17:31.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:17:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:31.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:32 compute-0 ceph-mon[74318]: pgmap v1796: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:33 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/82362566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:17:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:33.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:17:33 compute-0 nova_compute[247516]: 2026-01-22 00:17:33.989 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:17:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:17:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:33.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:17:34 compute-0 ceph-mon[74318]: pgmap v1797: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:34 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/4144173070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:17:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:17:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:35.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:17:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:17:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:35.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:17:36 compute-0 ceph-mon[74318]: pgmap v1798: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:37.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:37 compute-0 nova_compute[247516]: 2026-01-22 00:17:37.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:17:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:38.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:38 compute-0 podman[285506]: 2026-01-22 00:17:38.059474119 +0000 UTC m=+0.163494680 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 22 00:17:38 compute-0 ceph-mon[74318]: pgmap v1799: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:17:38 compute-0 nova_compute[247516]: 2026-01-22 00:17:38.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:17:39
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', '.rgw.root', 'backups', 'volumes', 'default.rgw.control']
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 22 00:17:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:39.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:17:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:17:39 compute-0 nova_compute[247516]: 2026-01-22 00:17:39.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:17:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:40.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:40 compute-0 ceph-mon[74318]: pgmap v1800: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:41 compute-0 sudo[285534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:17:41 compute-0 sudo[285534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:41 compute-0 sudo[285534]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:41 compute-0 sudo[285559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:17:41 compute-0 sudo[285559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:17:41 compute-0 sudo[285559]: pam_unix(sudo:session): session closed for user root
Jan 22 00:17:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:17:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:41.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:17:41 compute-0 nova_compute[247516]: 2026-01-22 00:17:41.987 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:17:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:42.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:42 compute-0 nova_compute[247516]: 2026-01-22 00:17:42.010 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:17:42 compute-0 nova_compute[247516]: 2026-01-22 00:17:42.036 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:17:42 compute-0 nova_compute[247516]: 2026-01-22 00:17:42.036 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:17:42 compute-0 nova_compute[247516]: 2026-01-22 00:17:42.036 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:17:42 compute-0 nova_compute[247516]: 2026-01-22 00:17:42.037 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:17:42 compute-0 nova_compute[247516]: 2026-01-22 00:17:42.037 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:17:42 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:17:42 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3700357723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:17:42 compute-0 nova_compute[247516]: 2026-01-22 00:17:42.534 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:17:42 compute-0 ceph-mon[74318]: pgmap v1801: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:42 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3700357723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:17:42 compute-0 nova_compute[247516]: 2026-01-22 00:17:42.716 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:17:42 compute-0 nova_compute[247516]: 2026-01-22 00:17:42.718 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5145MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:17:42 compute-0 nova_compute[247516]: 2026-01-22 00:17:42.719 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:17:42 compute-0 nova_compute[247516]: 2026-01-22 00:17:42.719 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:17:42 compute-0 nova_compute[247516]: 2026-01-22 00:17:42.855 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 22 00:17:42 compute-0 nova_compute[247516]: 2026-01-22 00:17:42.856 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:17:42 compute-0 nova_compute[247516]: 2026-01-22 00:17:42.857 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:17:42 compute-0 nova_compute[247516]: 2026-01-22 00:17:42.931 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Refreshing inventories for resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 22 00:17:43 compute-0 nova_compute[247516]: 2026-01-22 00:17:43.019 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Updating ProviderTree inventory for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 22 00:17:43 compute-0 nova_compute[247516]: 2026-01-22 00:17:43.020 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Updating inventory in ProviderTree for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 00:17:43 compute-0 nova_compute[247516]: 2026-01-22 00:17:43.036 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Refreshing aggregate associations for resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 22 00:17:43 compute-0 nova_compute[247516]: 2026-01-22 00:17:43.056 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Refreshing trait associations for resource provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 22 00:17:43 compute-0 nova_compute[247516]: 2026-01-22 00:17:43.089 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:17:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:17:43 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684111980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:17:43 compute-0 nova_compute[247516]: 2026-01-22 00:17:43.572 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:17:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:17:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:43.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:17:43 compute-0 nova_compute[247516]: 2026-01-22 00:17:43.581 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:17:43 compute-0 nova_compute[247516]: 2026-01-22 00:17:43.621 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 00:17:43 compute-0 nova_compute[247516]: 2026-01-22 00:17:43.624 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:17:43 compute-0 nova_compute[247516]: 2026-01-22 00:17:43.625 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:17:43 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2684111980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:17:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:17:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:44.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:44 compute-0 ceph-mon[74318]: pgmap v1802: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:17:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:45.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:17:45 compute-0 nova_compute[247516]: 2026-01-22 00:17:45.607 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:17:45 compute-0 nova_compute[247516]: 2026-01-22 00:17:45.608 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:17:45 compute-0 podman[285631]: 2026-01-22 00:17:45.971302736 +0000 UTC m=+0.074494136 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 22 00:17:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:46.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:46 compute-0 ceph-mon[74318]: pgmap v1803: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:17:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:47.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:17:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:17:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:48.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:17:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:17:48.779 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:17:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:17:48.781 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:17:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:17:48.781 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:17:48 compute-0 ceph-mon[74318]: pgmap v1804: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:17:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:49.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:50.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:50 compute-0 ceph-mon[74318]: pgmap v1805: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:17:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:51.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:17:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:17:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:52.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:17:52 compute-0 ceph-mon[74318]: pgmap v1806: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:53.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:17:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:17:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:54.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:17:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 00:17:54 compute-0 ceph-mon[74318]: pgmap v1807: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:55.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:56.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:56 compute-0 ceph-mon[74318]: pgmap v1808: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:17:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:57.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:17:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:17:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:17:58.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:17:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:17:59 compute-0 ceph-mon[74318]: pgmap v1809: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:17:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:17:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:17:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:17:59.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:18:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:18:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:00.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:18:00 compute-0 ceph-mon[74318]: pgmap v1810: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:01 compute-0 sudo[285657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:18:01 compute-0 sudo[285657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:01 compute-0 sudo[285657]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:01 compute-0 sudo[285682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:18:01 compute-0 sudo[285682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:01 compute-0 sudo[285682]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:01.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:02.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:02 compute-0 ceph-mon[74318]: pgmap v1811: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:03.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:18:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:18:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:04.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:18:04 compute-0 ceph-mon[74318]: pgmap v1812: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:18:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:05.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:18:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:06.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:06 compute-0 ceph-mon[74318]: pgmap v1813: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:07.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:08.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:08 compute-0 ceph-mon[74318]: pgmap v1814: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:18:09 compute-0 podman[285711]: 2026-01-22 00:18:09.013629306 +0000 UTC m=+0.118634012 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:18:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:18:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:18:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:18:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:18:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:18:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:18:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:09.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:10.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:10 compute-0 ceph-mon[74318]: pgmap v1815: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:18:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:11.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:18:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:18:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:12.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:18:12 compute-0 ceph-mon[74318]: pgmap v1816: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:13.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:18:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:14.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:14 compute-0 ceph-mon[74318]: pgmap v1817: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:15.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:16.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:16 compute-0 podman[285741]: 2026-01-22 00:18:16.94370908 +0000 UTC m=+0.054466747 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 00:18:16 compute-0 ceph-mon[74318]: pgmap v1818: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:17.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:18.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:18 compute-0 ceph-mon[74318]: pgmap v1819: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:18:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:18:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:19.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:18:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:20.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:20 compute-0 ceph-mon[74318]: pgmap v1820: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:21 compute-0 sudo[285764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:18:21 compute-0 sudo[285764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:21 compute-0 sudo[285764]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:21 compute-0 sudo[285790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:18:21 compute-0 sudo[285790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:21 compute-0 sudo[285790]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:21.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:18:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:22.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:18:22 compute-0 ceph-mon[74318]: pgmap v1821: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:18:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:23.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:18:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:18:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:18:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:24.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:18:25 compute-0 ceph-mon[74318]: pgmap v1822: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:18:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:25.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:18:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:18:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:26.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:18:26 compute-0 ceph-mon[74318]: pgmap v1823: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1869011010' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:18:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1869011010' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:18:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:18:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:27.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:18:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:28.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:18:29 compute-0 ceph-mon[74318]: pgmap v1824: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3145912454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:18:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:29.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:30.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:30 compute-0 ceph-mon[74318]: pgmap v1825: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:30 compute-0 sudo[285819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:18:30 compute-0 sudo[285819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:30 compute-0 sudo[285819]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:30 compute-0 sudo[285844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:18:30 compute-0 sudo[285844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:30 compute-0 sudo[285844]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:31 compute-0 sudo[285869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:18:31 compute-0 sudo[285869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:31 compute-0 sudo[285869]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:31 compute-0 sudo[285894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 00:18:31 compute-0 sudo[285894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2835043399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:18:31 compute-0 sudo[285894]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:31.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 22 00:18:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 00:18:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:18:31 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:18:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 00:18:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:18:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 00:18:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:18:31 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 7bfae116-c54c-4b0d-b39d-eed0c1996aac does not exist
Jan 22 00:18:31 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev cdd87f5f-88b5-425a-afab-483059323eda does not exist
Jan 22 00:18:31 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev c7cfbdd0-7fdc-492a-a1c5-cbe56cf1a519 does not exist
Jan 22 00:18:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 00:18:31 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:18:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 00:18:31 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:18:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:18:31 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:18:31 compute-0 sudo[285951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:18:31 compute-0 sudo[285951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:31 compute-0 sudo[285951]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:31 compute-0 sudo[285976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:18:31 compute-0 sudo[285976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:31 compute-0 sudo[285976]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:31 compute-0 nova_compute[247516]: 2026-01-22 00:18:31.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:18:31 compute-0 nova_compute[247516]: 2026-01-22 00:18:31.994 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:18:31 compute-0 nova_compute[247516]: 2026-01-22 00:18:31.994 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:18:32 compute-0 sudo[286001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:18:32 compute-0 sudo[286001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:32 compute-0 sudo[286001]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:32.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:32 compute-0 sudo[286026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 00:18:32 compute-0 sudo[286026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:32 compute-0 nova_compute[247516]: 2026-01-22 00:18:32.355 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 00:18:32 compute-0 ceph-mon[74318]: pgmap v1826: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:32 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 00:18:32 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:18:32 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:18:32 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:18:32 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:18:32 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:18:32 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:18:32 compute-0 podman[286090]: 2026-01-22 00:18:32.445295036 +0000 UTC m=+0.052590078 container create 16c934ffc88f147e6d7d297c5987f81734bf76d1307d98bd8554810d7d62f831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carver, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:18:32 compute-0 systemd[1]: Started libpod-conmon-16c934ffc88f147e6d7d297c5987f81734bf76d1307d98bd8554810d7d62f831.scope.
Jan 22 00:18:32 compute-0 podman[286090]: 2026-01-22 00:18:32.425929697 +0000 UTC m=+0.033224769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:18:32 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:18:32 compute-0 podman[286090]: 2026-01-22 00:18:32.547427896 +0000 UTC m=+0.154722968 container init 16c934ffc88f147e6d7d297c5987f81734bf76d1307d98bd8554810d7d62f831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:18:32 compute-0 podman[286090]: 2026-01-22 00:18:32.561398239 +0000 UTC m=+0.168693321 container start 16c934ffc88f147e6d7d297c5987f81734bf76d1307d98bd8554810d7d62f831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 00:18:32 compute-0 podman[286090]: 2026-01-22 00:18:32.566092385 +0000 UTC m=+0.173387457 container attach 16c934ffc88f147e6d7d297c5987f81734bf76d1307d98bd8554810d7d62f831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carver, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 22 00:18:32 compute-0 eager_carver[286106]: 167 167
Jan 22 00:18:32 compute-0 systemd[1]: libpod-16c934ffc88f147e6d7d297c5987f81734bf76d1307d98bd8554810d7d62f831.scope: Deactivated successfully.
Jan 22 00:18:32 compute-0 podman[286090]: 2026-01-22 00:18:32.571223283 +0000 UTC m=+0.178518355 container died 16c934ffc88f147e6d7d297c5987f81734bf76d1307d98bd8554810d7d62f831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carver, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 00:18:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-409b7fe072be9810f368f7e16610fde118baf90e7941cd3a38ab33f6698f335e-merged.mount: Deactivated successfully.
Jan 22 00:18:32 compute-0 podman[286090]: 2026-01-22 00:18:32.621970604 +0000 UTC m=+0.229265636 container remove 16c934ffc88f147e6d7d297c5987f81734bf76d1307d98bd8554810d7d62f831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Jan 22 00:18:32 compute-0 systemd[1]: libpod-conmon-16c934ffc88f147e6d7d297c5987f81734bf76d1307d98bd8554810d7d62f831.scope: Deactivated successfully.
Jan 22 00:18:32 compute-0 podman[286131]: 2026-01-22 00:18:32.784618818 +0000 UTC m=+0.044868910 container create d02f3c18b134c0c2c984e9f186fb19fc2cab8c16bb4f369000d58dc460c0e8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_grothendieck, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:18:32 compute-0 systemd[1]: Started libpod-conmon-d02f3c18b134c0c2c984e9f186fb19fc2cab8c16bb4f369000d58dc460c0e8c0.scope.
Jan 22 00:18:32 compute-0 podman[286131]: 2026-01-22 00:18:32.76339144 +0000 UTC m=+0.023641542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:18:32 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a817b3715b77d22c7d79886aeb2d76503c6f20e9e38835905b92b9d9c52b1cf6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a817b3715b77d22c7d79886aeb2d76503c6f20e9e38835905b92b9d9c52b1cf6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a817b3715b77d22c7d79886aeb2d76503c6f20e9e38835905b92b9d9c52b1cf6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a817b3715b77d22c7d79886aeb2d76503c6f20e9e38835905b92b9d9c52b1cf6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a817b3715b77d22c7d79886aeb2d76503c6f20e9e38835905b92b9d9c52b1cf6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 00:18:32 compute-0 podman[286131]: 2026-01-22 00:18:32.887400598 +0000 UTC m=+0.147650750 container init d02f3c18b134c0c2c984e9f186fb19fc2cab8c16bb4f369000d58dc460c0e8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_grothendieck, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 00:18:32 compute-0 podman[286131]: 2026-01-22 00:18:32.89522395 +0000 UTC m=+0.155474072 container start d02f3c18b134c0c2c984e9f186fb19fc2cab8c16bb4f369000d58dc460c0e8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 00:18:32 compute-0 podman[286131]: 2026-01-22 00:18:32.898806351 +0000 UTC m=+0.159056483 container attach d02f3c18b134c0c2c984e9f186fb19fc2cab8c16bb4f369000d58dc460c0e8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 00:18:32 compute-0 nova_compute[247516]: 2026-01-22 00:18:32.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:18:32 compute-0 nova_compute[247516]: 2026-01-22 00:18:32.994 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:18:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:33.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:33 compute-0 focused_grothendieck[286147]: --> passed data devices: 0 physical, 1 LVM
Jan 22 00:18:33 compute-0 focused_grothendieck[286147]: --> relative data size: 1.0
Jan 22 00:18:33 compute-0 focused_grothendieck[286147]: --> All data devices are unavailable
Jan 22 00:18:33 compute-0 systemd[1]: libpod-d02f3c18b134c0c2c984e9f186fb19fc2cab8c16bb4f369000d58dc460c0e8c0.scope: Deactivated successfully.
Jan 22 00:18:33 compute-0 conmon[286147]: conmon d02f3c18b134c0c2c984 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d02f3c18b134c0c2c984e9f186fb19fc2cab8c16bb4f369000d58dc460c0e8c0.scope/container/memory.events
Jan 22 00:18:33 compute-0 podman[286131]: 2026-01-22 00:18:33.843071224 +0000 UTC m=+1.103321326 container died d02f3c18b134c0c2c984e9f186fb19fc2cab8c16bb4f369000d58dc460c0e8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 00:18:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a817b3715b77d22c7d79886aeb2d76503c6f20e9e38835905b92b9d9c52b1cf6-merged.mount: Deactivated successfully.
Jan 22 00:18:33 compute-0 podman[286131]: 2026-01-22 00:18:33.918422247 +0000 UTC m=+1.178672329 container remove d02f3c18b134c0c2c984e9f186fb19fc2cab8c16bb4f369000d58dc460c0e8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_grothendieck, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:18:33 compute-0 systemd[1]: libpod-conmon-d02f3c18b134c0c2c984e9f186fb19fc2cab8c16bb4f369000d58dc460c0e8c0.scope: Deactivated successfully.
Jan 22 00:18:33 compute-0 sudo[286026]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:18:33 compute-0 nova_compute[247516]: 2026-01-22 00:18:33.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:18:34 compute-0 sudo[286176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:18:34 compute-0 sudo[286176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:34 compute-0 sudo[286176]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:18:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:34.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:18:34 compute-0 sudo[286201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:18:34 compute-0 sudo[286201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:34 compute-0 sudo[286201]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:34 compute-0 sudo[286226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:18:34 compute-0 sudo[286226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:34 compute-0 sudo[286226]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:34 compute-0 sudo[286251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 22 00:18:34 compute-0 sudo[286251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:34 compute-0 ceph-mon[74318]: pgmap v1827: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:34 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3662519353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:18:34 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/101917025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:18:34 compute-0 podman[286316]: 2026-01-22 00:18:34.597461742 +0000 UTC m=+0.034685455 container create 6db82a5f95466bee7435472634d9136bcdbf8d8eb3aee7bdc492804f978e1cce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 00:18:34 compute-0 systemd[1]: Started libpod-conmon-6db82a5f95466bee7435472634d9136bcdbf8d8eb3aee7bdc492804f978e1cce.scope.
Jan 22 00:18:34 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:18:34 compute-0 podman[286316]: 2026-01-22 00:18:34.671676289 +0000 UTC m=+0.108900052 container init 6db82a5f95466bee7435472634d9136bcdbf8d8eb3aee7bdc492804f978e1cce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 00:18:34 compute-0 podman[286316]: 2026-01-22 00:18:34.582377654 +0000 UTC m=+0.019601387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:18:34 compute-0 podman[286316]: 2026-01-22 00:18:34.679749159 +0000 UTC m=+0.116972872 container start 6db82a5f95466bee7435472634d9136bcdbf8d8eb3aee7bdc492804f978e1cce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 00:18:34 compute-0 podman[286316]: 2026-01-22 00:18:34.683166904 +0000 UTC m=+0.120390617 container attach 6db82a5f95466bee7435472634d9136bcdbf8d8eb3aee7bdc492804f978e1cce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_snyder, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:18:34 compute-0 lucid_snyder[286332]: 167 167
Jan 22 00:18:34 compute-0 systemd[1]: libpod-6db82a5f95466bee7435472634d9136bcdbf8d8eb3aee7bdc492804f978e1cce.scope: Deactivated successfully.
Jan 22 00:18:34 compute-0 podman[286316]: 2026-01-22 00:18:34.688138638 +0000 UTC m=+0.125362361 container died 6db82a5f95466bee7435472634d9136bcdbf8d8eb3aee7bdc492804f978e1cce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_snyder, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 00:18:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4148f0590fd7f185571fe604e31062cf233375a3d4b71d2753d93e5cd84c611-merged.mount: Deactivated successfully.
Jan 22 00:18:34 compute-0 podman[286316]: 2026-01-22 00:18:34.737612909 +0000 UTC m=+0.174836662 container remove 6db82a5f95466bee7435472634d9136bcdbf8d8eb3aee7bdc492804f978e1cce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_snyder, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 00:18:34 compute-0 systemd[1]: libpod-conmon-6db82a5f95466bee7435472634d9136bcdbf8d8eb3aee7bdc492804f978e1cce.scope: Deactivated successfully.
Jan 22 00:18:34 compute-0 podman[286355]: 2026-01-22 00:18:34.951639463 +0000 UTC m=+0.065006903 container create e4e81abbbd5532479514664f864a336cd03c0be77ff796d0264819fe7b1c3a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 00:18:34 compute-0 systemd[1]: Started libpod-conmon-e4e81abbbd5532479514664f864a336cd03c0be77ff796d0264819fe7b1c3a20.scope.
Jan 22 00:18:35 compute-0 podman[286355]: 2026-01-22 00:18:34.930326503 +0000 UTC m=+0.043693973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:18:35 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9233bc6914fde5bf0b65d72bfbd2b63c07742621518cbe9b78e5f1d0b976bde1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9233bc6914fde5bf0b65d72bfbd2b63c07742621518cbe9b78e5f1d0b976bde1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9233bc6914fde5bf0b65d72bfbd2b63c07742621518cbe9b78e5f1d0b976bde1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9233bc6914fde5bf0b65d72bfbd2b63c07742621518cbe9b78e5f1d0b976bde1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:18:35 compute-0 podman[286355]: 2026-01-22 00:18:35.045950572 +0000 UTC m=+0.159318042 container init e4e81abbbd5532479514664f864a336cd03c0be77ff796d0264819fe7b1c3a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:18:35 compute-0 podman[286355]: 2026-01-22 00:18:35.053379111 +0000 UTC m=+0.166746581 container start e4e81abbbd5532479514664f864a336cd03c0be77ff796d0264819fe7b1c3a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:18:35 compute-0 podman[286355]: 2026-01-22 00:18:35.057756767 +0000 UTC m=+0.171124237 container attach e4e81abbbd5532479514664f864a336cd03c0be77ff796d0264819fe7b1c3a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brattain, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:18:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:35.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]: {
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:     "1": [
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:         {
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:             "devices": [
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:                 "/dev/loop3"
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:             ],
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:             "lv_name": "ceph_lv0",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:             "lv_size": "7511998464",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:             "name": "ceph_lv0",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:             "tags": {
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:                 "ceph.cluster_name": "ceph",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:                 "ceph.crush_device_class": "",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:                 "ceph.encrypted": "0",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:                 "ceph.osd_id": "1",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:                 "ceph.type": "block",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:                 "ceph.vdo": "0"
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:             },
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:             "type": "block",
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:             "vg_name": "ceph_vg0"
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:         }
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]:     ]
Jan 22 00:18:35 compute-0 hopeful_brattain[286371]: }
Jan 22 00:18:35 compute-0 systemd[1]: libpod-e4e81abbbd5532479514664f864a336cd03c0be77ff796d0264819fe7b1c3a20.scope: Deactivated successfully.
Jan 22 00:18:35 compute-0 podman[286355]: 2026-01-22 00:18:35.858029174 +0000 UTC m=+0.971396604 container died e4e81abbbd5532479514664f864a336cd03c0be77ff796d0264819fe7b1c3a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 00:18:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9233bc6914fde5bf0b65d72bfbd2b63c07742621518cbe9b78e5f1d0b976bde1-merged.mount: Deactivated successfully.
Jan 22 00:18:35 compute-0 podman[286355]: 2026-01-22 00:18:35.913856762 +0000 UTC m=+1.027224192 container remove e4e81abbbd5532479514664f864a336cd03c0be77ff796d0264819fe7b1c3a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brattain, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 00:18:35 compute-0 systemd[1]: libpod-conmon-e4e81abbbd5532479514664f864a336cd03c0be77ff796d0264819fe7b1c3a20.scope: Deactivated successfully.
Jan 22 00:18:35 compute-0 sudo[286251]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:36 compute-0 sudo[286395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:18:36 compute-0 sudo[286395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:36 compute-0 sudo[286395]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:36.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:36 compute-0 sudo[286420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:18:36 compute-0 sudo[286420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:36 compute-0 sudo[286420]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:36 compute-0 sudo[286445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:18:36 compute-0 sudo[286445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:36 compute-0 sudo[286445]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:36 compute-0 sudo[286470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 22 00:18:36 compute-0 sudo[286470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:36 compute-0 ceph-mon[74318]: pgmap v1828: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:36 compute-0 podman[286537]: 2026-01-22 00:18:36.67569854 +0000 UTC m=+0.059619657 container create 840d96cf3d90b8742ea43108bbf1636019b1925e7ae9fb1d10157d02d313e69c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bouman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:18:36 compute-0 systemd[1]: Started libpod-conmon-840d96cf3d90b8742ea43108bbf1636019b1925e7ae9fb1d10157d02d313e69c.scope.
Jan 22 00:18:36 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:18:36 compute-0 podman[286537]: 2026-01-22 00:18:36.6508208 +0000 UTC m=+0.034741987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:18:36 compute-0 podman[286537]: 2026-01-22 00:18:36.75585516 +0000 UTC m=+0.139776287 container init 840d96cf3d90b8742ea43108bbf1636019b1925e7ae9fb1d10157d02d313e69c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 00:18:36 compute-0 podman[286537]: 2026-01-22 00:18:36.762749334 +0000 UTC m=+0.146670441 container start 840d96cf3d90b8742ea43108bbf1636019b1925e7ae9fb1d10157d02d313e69c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 00:18:36 compute-0 podman[286537]: 2026-01-22 00:18:36.766639494 +0000 UTC m=+0.150560621 container attach 840d96cf3d90b8742ea43108bbf1636019b1925e7ae9fb1d10157d02d313e69c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 00:18:36 compute-0 lucid_bouman[286553]: 167 167
Jan 22 00:18:36 compute-0 systemd[1]: libpod-840d96cf3d90b8742ea43108bbf1636019b1925e7ae9fb1d10157d02d313e69c.scope: Deactivated successfully.
Jan 22 00:18:36 compute-0 podman[286537]: 2026-01-22 00:18:36.770448692 +0000 UTC m=+0.154369799 container died 840d96cf3d90b8742ea43108bbf1636019b1925e7ae9fb1d10157d02d313e69c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bouman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:18:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2801e2888011e7b8c5cbf4938fa90779099b6e9cff671d003d6e2e2591bccf9a-merged.mount: Deactivated successfully.
Jan 22 00:18:36 compute-0 podman[286537]: 2026-01-22 00:18:36.81239451 +0000 UTC m=+0.196315617 container remove 840d96cf3d90b8742ea43108bbf1636019b1925e7ae9fb1d10157d02d313e69c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 00:18:36 compute-0 systemd[1]: libpod-conmon-840d96cf3d90b8742ea43108bbf1636019b1925e7ae9fb1d10157d02d313e69c.scope: Deactivated successfully.
Jan 22 00:18:36 compute-0 podman[286577]: 2026-01-22 00:18:36.974849258 +0000 UTC m=+0.047893433 container create 833f92b9d83f05956043c6be264388bf3a58c3273b54b35a042eb8d628508b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dijkstra, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:18:37 compute-0 systemd[1]: Started libpod-conmon-833f92b9d83f05956043c6be264388bf3a58c3273b54b35a042eb8d628508b9f.scope.
Jan 22 00:18:37 compute-0 podman[286577]: 2026-01-22 00:18:36.950071491 +0000 UTC m=+0.023115756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:18:37 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:18:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d7ea9b8c47f3e226b57a92c7a23762446f275fdbf681416b4b52bf46ae3a61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:18:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d7ea9b8c47f3e226b57a92c7a23762446f275fdbf681416b4b52bf46ae3a61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:18:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d7ea9b8c47f3e226b57a92c7a23762446f275fdbf681416b4b52bf46ae3a61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:18:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d7ea9b8c47f3e226b57a92c7a23762446f275fdbf681416b4b52bf46ae3a61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:18:37 compute-0 podman[286577]: 2026-01-22 00:18:37.069473476 +0000 UTC m=+0.142517671 container init 833f92b9d83f05956043c6be264388bf3a58c3273b54b35a042eb8d628508b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:18:37 compute-0 podman[286577]: 2026-01-22 00:18:37.081122347 +0000 UTC m=+0.154166512 container start 833f92b9d83f05956043c6be264388bf3a58c3273b54b35a042eb8d628508b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dijkstra, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:18:37 compute-0 podman[286577]: 2026-01-22 00:18:37.085017487 +0000 UTC m=+0.158061702 container attach 833f92b9d83f05956043c6be264388bf3a58c3273b54b35a042eb8d628508b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dijkstra, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:18:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:18:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:37.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:18:37 compute-0 naughty_dijkstra[286593]: {
Jan 22 00:18:37 compute-0 naughty_dijkstra[286593]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 22 00:18:37 compute-0 naughty_dijkstra[286593]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:18:37 compute-0 naughty_dijkstra[286593]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 00:18:37 compute-0 naughty_dijkstra[286593]:         "osd_id": 1,
Jan 22 00:18:37 compute-0 naughty_dijkstra[286593]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:18:37 compute-0 naughty_dijkstra[286593]:         "type": "bluestore"
Jan 22 00:18:37 compute-0 naughty_dijkstra[286593]:     }
Jan 22 00:18:37 compute-0 naughty_dijkstra[286593]: }
Jan 22 00:18:38 compute-0 systemd[1]: libpod-833f92b9d83f05956043c6be264388bf3a58c3273b54b35a042eb8d628508b9f.scope: Deactivated successfully.
Jan 22 00:18:38 compute-0 podman[286615]: 2026-01-22 00:18:38.068116592 +0000 UTC m=+0.035080586 container died 833f92b9d83f05956043c6be264388bf3a58c3273b54b35a042eb8d628508b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:18:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:38.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6d7ea9b8c47f3e226b57a92c7a23762446f275fdbf681416b4b52bf46ae3a61-merged.mount: Deactivated successfully.
Jan 22 00:18:38 compute-0 podman[286615]: 2026-01-22 00:18:38.136837819 +0000 UTC m=+0.103801783 container remove 833f92b9d83f05956043c6be264388bf3a58c3273b54b35a042eb8d628508b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:18:38 compute-0 systemd[1]: libpod-conmon-833f92b9d83f05956043c6be264388bf3a58c3273b54b35a042eb8d628508b9f.scope: Deactivated successfully.
Jan 22 00:18:38 compute-0 sudo[286470]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:18:38 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:18:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:18:38 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:18:38 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 6f59abc9-180e-4954-b9b6-71b0f0956468 does not exist
Jan 22 00:18:38 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 779d6b57-0495-4128-b00c-78643e8b50ab does not exist
Jan 22 00:18:38 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 61f1d8eb-3481-4657-9ce9-796069ca825e does not exist
Jan 22 00:18:38 compute-0 sudo[286630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:18:38 compute-0 sudo[286630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:38 compute-0 sudo[286630]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:38 compute-0 sudo[286655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 00:18:38 compute-0 sudo[286655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:38 compute-0 sudo[286655]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:38 compute-0 ceph-mon[74318]: pgmap v1829: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:38 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:18:38 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:18:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:18:39
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['.rgw.root', 'vms', '.mgr', 'backups', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images']
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:18:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:18:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:39.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:18:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:18:39 compute-0 nova_compute[247516]: 2026-01-22 00:18:39.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:18:40 compute-0 podman[286681]: 2026-01-22 00:18:40.005700607 +0000 UTC m=+0.113779191 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 00:18:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:40.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:40 compute-0 ceph-mon[74318]: pgmap v1830: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:40 compute-0 nova_compute[247516]: 2026-01-22 00:18:40.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:18:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:41 compute-0 sudo[286709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:18:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:41 compute-0 sudo[286709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:41.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:41 compute-0 sudo[286709]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:41 compute-0 sudo[286734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:18:41 compute-0 sudo[286734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:18:41 compute-0 sudo[286734]: pam_unix(sudo:session): session closed for user root
Jan 22 00:18:41 compute-0 nova_compute[247516]: 2026-01-22 00:18:41.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:18:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:18:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:42.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:18:42 compute-0 ceph-mon[74318]: pgmap v1831: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:42 compute-0 nova_compute[247516]: 2026-01-22 00:18:42.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:18:43 compute-0 nova_compute[247516]: 2026-01-22 00:18:43.019 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:18:43 compute-0 nova_compute[247516]: 2026-01-22 00:18:43.020 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:18:43 compute-0 nova_compute[247516]: 2026-01-22 00:18:43.021 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:18:43 compute-0 nova_compute[247516]: 2026-01-22 00:18:43.021 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:18:43 compute-0 nova_compute[247516]: 2026-01-22 00:18:43.022 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:18:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:18:43 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1815840307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:18:43 compute-0 nova_compute[247516]: 2026-01-22 00:18:43.484 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:18:43 compute-0 nova_compute[247516]: 2026-01-22 00:18:43.668 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:18:43 compute-0 nova_compute[247516]: 2026-01-22 00:18:43.670 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5109MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:18:43 compute-0 nova_compute[247516]: 2026-01-22 00:18:43.670 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:18:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:43 compute-0 nova_compute[247516]: 2026-01-22 00:18:43.671 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:18:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:43.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:43 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1815840307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:18:43 compute-0 nova_compute[247516]: 2026-01-22 00:18:43.816 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 22 00:18:43 compute-0 nova_compute[247516]: 2026-01-22 00:18:43.816 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:18:43 compute-0 nova_compute[247516]: 2026-01-22 00:18:43.817 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:18:43 compute-0 nova_compute[247516]: 2026-01-22 00:18:43.870 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:18:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:18:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:44.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:44 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:18:44 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3309397963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:18:44 compute-0 nova_compute[247516]: 2026-01-22 00:18:44.393 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:18:44 compute-0 nova_compute[247516]: 2026-01-22 00:18:44.401 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:18:44 compute-0 nova_compute[247516]: 2026-01-22 00:18:44.430 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 00:18:44 compute-0 nova_compute[247516]: 2026-01-22 00:18:44.434 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:18:44 compute-0 nova_compute[247516]: 2026-01-22 00:18:44.435 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:18:44 compute-0 ceph-mon[74318]: pgmap v1832: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:44 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3309397963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:18:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:45 compute-0 nova_compute[247516]: 2026-01-22 00:18:45.436 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:18:45 compute-0 nova_compute[247516]: 2026-01-22 00:18:45.437 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:18:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:45.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:46.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:46 compute-0 ceph-mon[74318]: pgmap v1833: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:47.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:47 compute-0 podman[286806]: 2026-01-22 00:18:47.966759499 +0000 UTC m=+0.066803770 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 00:18:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:48.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:18:48.780 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:18:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:18:48.782 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:18:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:18:48.782 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:18:48 compute-0 ceph-mon[74318]: pgmap v1834: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:18:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:49.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:50.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:50 compute-0 ceph-mon[74318]: pgmap v1835: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:51.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:52.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:52 compute-0 ceph-mon[74318]: pgmap v1836: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:53.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:18:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:18:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:54.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:18:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 00:18:55 compute-0 ceph-mon[74318]: pgmap v1837: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:18:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:55.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:18:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:18:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:56.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:18:56 compute-0 ceph-mon[74318]: pgmap v1838: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:18:56.294759) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769041136294815, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 2109, "num_deletes": 251, "total_data_size": 3922488, "memory_usage": 3977440, "flush_reason": "Manual Compaction"}
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769041136339669, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 3844846, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38532, "largest_seqno": 40640, "table_properties": {"data_size": 3835286, "index_size": 6054, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19256, "raw_average_key_size": 20, "raw_value_size": 3816387, "raw_average_value_size": 4025, "num_data_blocks": 264, "num_entries": 948, "num_filter_entries": 948, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769040915, "oldest_key_time": 1769040915, "file_creation_time": 1769041136, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 45019 microseconds, and 12192 cpu microseconds.
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:18:56.339773) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 3844846 bytes OK
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:18:56.339805) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:18:56.365888) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:18:56.365950) EVENT_LOG_v1 {"time_micros": 1769041136365937, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:18:56.365980) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 3914026, prev total WAL file size 3914026, number of live WAL files 2.
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:18:56.367364) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(3754KB)], [86(8265KB)]
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769041136367682, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 12309139, "oldest_snapshot_seqno": -1}
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6452 keys, 10342755 bytes, temperature: kUnknown
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769041136470485, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 10342755, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10300273, "index_size": 25220, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 165053, "raw_average_key_size": 25, "raw_value_size": 10184619, "raw_average_value_size": 1578, "num_data_blocks": 1014, "num_entries": 6452, "num_filter_entries": 6452, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769041136, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:18:56.470883) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10342755 bytes
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:18:56.472594) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.6 rd, 100.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.1 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 6971, records dropped: 519 output_compression: NoCompression
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:18:56.472612) EVENT_LOG_v1 {"time_micros": 1769041136472602, "job": 50, "event": "compaction_finished", "compaction_time_micros": 102946, "compaction_time_cpu_micros": 29318, "output_level": 6, "num_output_files": 1, "total_output_size": 10342755, "num_input_records": 6971, "num_output_records": 6452, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769041136473370, "job": 50, "event": "table_file_deletion", "file_number": 88}
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769041136475262, "job": 50, "event": "table_file_deletion", "file_number": 86}
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:18:56.367196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:18:56.475377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:18:56.475384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:18:56.475390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:18:56.475392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:18:56 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:18:56.475393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:18:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:18:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:57.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:18:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:18:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:18:58.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:18:58 compute-0 ceph-mon[74318]: pgmap v1839: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:18:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:18:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:18:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:18:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:18:59.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:00.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:00 compute-0 ceph-mon[74318]: pgmap v1840: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:01.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:01 compute-0 sudo[286832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:19:01 compute-0 sudo[286832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:01 compute-0 sudo[286832]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:01 compute-0 sudo[286857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:19:01 compute-0 sudo[286857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:01 compute-0 sudo[286857]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:19:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:02.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:02 compute-0 ceph-mon[74318]: pgmap v1841: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:03.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:19:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:04.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:04 compute-0 ceph-mon[74318]: pgmap v1842: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:05.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:19:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:06.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:19:07 compute-0 ceph-mon[74318]: pgmap v1843: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:07.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:08.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:08 compute-0 ceph-mon[74318]: pgmap v1844: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:19:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:19:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:19:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:19:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:19:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:19:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:19:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:19:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:09.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:10.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:10 compute-0 ceph-mon[74318]: pgmap v1845: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:10 compute-0 podman[286886]: 2026-01-22 00:19:10.972401343 +0000 UTC m=+0.090272355 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 22 00:19:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:11.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:12.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:12 compute-0 ceph-mon[74318]: pgmap v1846: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:13.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:19:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:14.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:14 compute-0 ceph-mon[74318]: pgmap v1847: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:19:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:15.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:19:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:16.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:16 compute-0 ceph-mon[74318]: pgmap v1848: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:17.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:19:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:18.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:18 compute-0 ceph-mon[74318]: pgmap v1849: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:18 compute-0 podman[286917]: 2026-01-22 00:19:18.966394845 +0000 UTC m=+0.071045790 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 00:19:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:19:18.983672) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769041158983751, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 418, "num_deletes": 255, "total_data_size": 365881, "memory_usage": 375192, "flush_reason": "Manual Compaction"}
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769041158989236, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 362619, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40641, "largest_seqno": 41058, "table_properties": {"data_size": 360139, "index_size": 580, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5717, "raw_average_key_size": 17, "raw_value_size": 355272, "raw_average_value_size": 1113, "num_data_blocks": 26, "num_entries": 319, "num_filter_entries": 319, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769041137, "oldest_key_time": 1769041137, "file_creation_time": 1769041158, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 5624 microseconds, and 2554 cpu microseconds.
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:19:18.989335) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 362619 bytes OK
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:19:18.989359) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:19:18.991813) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:19:18.991826) EVENT_LOG_v1 {"time_micros": 1769041158991822, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:19:18.991848) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 363300, prev total WAL file size 363300, number of live WAL files 2.
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:19:18.992511) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323534' seq:72057594037927935, type:22 .. '6C6F676D0031353035' seq:0, type:0; will stop at (end)
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(354KB)], [89(10100KB)]
Jan 22 00:19:18 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769041158993186, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 10705374, "oldest_snapshot_seqno": -1}
Jan 22 00:19:19 compute-0 ceph-mon[74318]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6253 keys, 10597701 bytes, temperature: kUnknown
Jan 22 00:19:19 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769041159142821, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 10597701, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10555706, "index_size": 25245, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 161833, "raw_average_key_size": 25, "raw_value_size": 10442636, "raw_average_value_size": 1670, "num_data_blocks": 1013, "num_entries": 6253, "num_filter_entries": 6253, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769037769, "oldest_key_time": 0, "file_creation_time": 1769041158, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "756e4229-f67c-4e5b-91a0-5975df843718", "db_session_id": "L1WW76NSVK36J4VFL8VG", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Jan 22 00:19:19 compute-0 ceph-mon[74318]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 00:19:19 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:19:19.143192) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 10597701 bytes
Jan 22 00:19:19 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:19:19.144738) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.5 rd, 70.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.9 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(58.7) write-amplify(29.2) OK, records in: 6771, records dropped: 518 output_compression: NoCompression
Jan 22 00:19:19 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:19:19.144759) EVENT_LOG_v1 {"time_micros": 1769041159144749, "job": 52, "event": "compaction_finished", "compaction_time_micros": 149713, "compaction_time_cpu_micros": 43860, "output_level": 6, "num_output_files": 1, "total_output_size": 10597701, "num_input_records": 6771, "num_output_records": 6253, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 00:19:19 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:19:19 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769041159144971, "job": 52, "event": "table_file_deletion", "file_number": 91}
Jan 22 00:19:19 compute-0 ceph-mon[74318]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 00:19:19 compute-0 ceph-mon[74318]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769041159147174, "job": 52, "event": "table_file_deletion", "file_number": 89}
Jan 22 00:19:19 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:19:18.992228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:19:19 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:19:19.147353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:19:19 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:19:19.147364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:19:19 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:19:19.147366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:19:19 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:19:19.147368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:19:19 compute-0 ceph-mon[74318]: rocksdb: (Original Log Time 2026/01/22-00:19:19.147370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 00:19:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:19.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:19:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:20.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:21 compute-0 ceph-mon[74318]: pgmap v1850: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:19:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:21.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:21 compute-0 sudo[286939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:19:21 compute-0 sudo[286939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:21 compute-0 sudo[286939]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:22 compute-0 sudo[286964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:19:22 compute-0 sudo[286964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:22 compute-0 sudo[286964]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:22.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:22 compute-0 ceph-mon[74318]: pgmap v1851: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:23.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:19:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:19:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:24.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:19:24 compute-0 ceph-mon[74318]: pgmap v1852: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:25.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:26.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:27 compute-0 ceph-mon[74318]: pgmap v1853: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1569279708' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:19:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/1569279708' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:19:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/209938161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:19:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:27.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3053891078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:19:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:28.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:19:29 compute-0 ceph-mon[74318]: pgmap v1854: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:29.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:30.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:31 compute-0 ceph-mon[74318]: pgmap v1855: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:19:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:31.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:31 compute-0 nova_compute[247516]: 2026-01-22 00:19:31.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:19:31 compute-0 nova_compute[247516]: 2026-01-22 00:19:31.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:19:31 compute-0 nova_compute[247516]: 2026-01-22 00:19:31.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:19:32 compute-0 nova_compute[247516]: 2026-01-22 00:19:32.015 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 00:19:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:32.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:33 compute-0 ceph-mon[74318]: pgmap v1856: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:33.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:33 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:19:33 compute-0 nova_compute[247516]: 2026-01-22 00:19:33.991 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:19:33 compute-0 nova_compute[247516]: 2026-01-22 00:19:33.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:19:33 compute-0 nova_compute[247516]: 2026-01-22 00:19:33.992 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 00:19:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:34.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:35 compute-0 ceph-mon[74318]: pgmap v1857: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:35.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:36 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3228444330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:19:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:36.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:37 compute-0 ceph-mon[74318]: pgmap v1858: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:37 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2247275402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:19:37 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:37 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:19:37 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:37.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:38 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:38 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:38 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:38.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:38 compute-0 sudo[286997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:19:38 compute-0 sudo[286997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:38 compute-0 sudo[286997]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:38 compute-0 sudo[287022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:19:38 compute-0 sudo[287022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:38 compute-0 sudo[287022]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:38 compute-0 sudo[287047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:19:38 compute-0 sudo[287047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:38 compute-0 sudo[287047]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:38 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:19:39 compute-0 sudo[287072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 00:19:39 compute-0 sudo[287072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Optimize plan auto_2026-01-22_00:19:39
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [balancer INFO root] do_upmap
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'backups', 'volumes', 'vms', 'default.rgw.log', 'default.rgw.control']
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [balancer INFO root] prepared 0/10 changes
Jan 22 00:19:39 compute-0 sudo[287072]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:19:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:19:39 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:19:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 22 00:19:39 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:19:39 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 22 00:19:39 compute-0 ceph-mgr[74614]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 22 00:19:39 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:39 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:19:39 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:39.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:39 compute-0 ceph-mon[74318]: pgmap v1859: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:40 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:19:40 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 6ffd93b1-47db-45a0-a343-afa0631a33dc does not exist
Jan 22 00:19:40 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 61cb4b8f-7479-44a6-bb4c-43d2c2ace29e does not exist
Jan 22 00:19:40 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 9ce906cb-acd8-40c6-97af-7ea6bca78267 does not exist
Jan 22 00:19:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 22 00:19:40 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:19:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 22 00:19:40 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:19:40 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:19:40 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:19:40 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:40 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:19:40 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:40.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:40 compute-0 sudo[287128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:19:40 compute-0 sudo[287128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:40 compute-0 sudo[287128]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:40 compute-0 sudo[287153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:19:40 compute-0 sudo[287153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:40 compute-0 sudo[287153]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:40 compute-0 sudo[287178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:19:40 compute-0 sudo[287178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:40 compute-0 sudo[287178]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:40 compute-0 sudo[287203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 00:19:40 compute-0 sudo[287203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:40 compute-0 podman[287269]: 2026-01-22 00:19:40.717338326 +0000 UTC m=+0.042937610 container create a7c14642ec6b36e4a307751aa12150f86b84126498ec47f720e81380eb29445d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 00:19:40 compute-0 systemd[1]: Started libpod-conmon-a7c14642ec6b36e4a307751aa12150f86b84126498ec47f720e81380eb29445d.scope.
Jan 22 00:19:40 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:19:40 compute-0 podman[287269]: 2026-01-22 00:19:40.694999934 +0000 UTC m=+0.020599258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:19:40 compute-0 podman[287269]: 2026-01-22 00:19:40.804828604 +0000 UTC m=+0.130427918 container init a7c14642ec6b36e4a307751aa12150f86b84126498ec47f720e81380eb29445d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:19:40 compute-0 podman[287269]: 2026-01-22 00:19:40.814252876 +0000 UTC m=+0.139852180 container start a7c14642ec6b36e4a307751aa12150f86b84126498ec47f720e81380eb29445d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Jan 22 00:19:40 compute-0 hopeful_visvesvaraya[287285]: 167 167
Jan 22 00:19:40 compute-0 podman[287269]: 2026-01-22 00:19:40.820731656 +0000 UTC m=+0.146330960 container attach a7c14642ec6b36e4a307751aa12150f86b84126498ec47f720e81380eb29445d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_visvesvaraya, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 00:19:40 compute-0 systemd[1]: libpod-a7c14642ec6b36e4a307751aa12150f86b84126498ec47f720e81380eb29445d.scope: Deactivated successfully.
Jan 22 00:19:40 compute-0 podman[287269]: 2026-01-22 00:19:40.82183337 +0000 UTC m=+0.147432664 container died a7c14642ec6b36e4a307751aa12150f86b84126498ec47f720e81380eb29445d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_visvesvaraya, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 00:19:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-37d2985120256dc5f14b91f22357c6d48cc7059cb52e48b69ab5e6aba8b500e1-merged.mount: Deactivated successfully.
Jan 22 00:19:40 compute-0 podman[287269]: 2026-01-22 00:19:40.876749159 +0000 UTC m=+0.202348453 container remove a7c14642ec6b36e4a307751aa12150f86b84126498ec47f720e81380eb29445d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_visvesvaraya, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:19:40 compute-0 systemd[1]: libpod-conmon-a7c14642ec6b36e4a307751aa12150f86b84126498ec47f720e81380eb29445d.scope: Deactivated successfully.
Jan 22 00:19:40 compute-0 nova_compute[247516]: 2026-01-22 00:19:40.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:19:40 compute-0 nova_compute[247516]: 2026-01-22 00:19:40.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:19:41 compute-0 podman[287311]: 2026-01-22 00:19:41.083061345 +0000 UTC m=+0.061060002 container create 7530ffd481dddd7d7b7ee197cda23f601891c4cb7baf1c1cab4c7c5f702941db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:19:41 compute-0 systemd[1]: Started libpod-conmon-7530ffd481dddd7d7b7ee197cda23f601891c4cb7baf1c1cab4c7c5f702941db.scope.
Jan 22 00:19:41 compute-0 podman[287311]: 2026-01-22 00:19:41.064864611 +0000 UTC m=+0.042863288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:19:41 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f2249376c452c2ddeeacadbf100b68efd8bdd0979053db28940167a016dedb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f2249376c452c2ddeeacadbf100b68efd8bdd0979053db28940167a016dedb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f2249376c452c2ddeeacadbf100b68efd8bdd0979053db28940167a016dedb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f2249376c452c2ddeeacadbf100b68efd8bdd0979053db28940167a016dedb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f2249376c452c2ddeeacadbf100b68efd8bdd0979053db28940167a016dedb6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 00:19:41 compute-0 ceph-mon[74318]: pgmap v1860: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:19:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 00:19:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:19:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 00:19:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 00:19:41 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:19:41 compute-0 podman[287311]: 2026-01-22 00:19:41.179832579 +0000 UTC m=+0.157831296 container init 7530ffd481dddd7d7b7ee197cda23f601891c4cb7baf1c1cab4c7c5f702941db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 00:19:41 compute-0 podman[287311]: 2026-01-22 00:19:41.201798059 +0000 UTC m=+0.179796716 container start 7530ffd481dddd7d7b7ee197cda23f601891c4cb7baf1c1cab4c7c5f702941db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_aryabhata, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 00:19:41 compute-0 podman[287311]: 2026-01-22 00:19:41.210663774 +0000 UTC m=+0.188662441 container attach 7530ffd481dddd7d7b7ee197cda23f601891c4cb7baf1c1cab4c7c5f702941db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 22 00:19:41 compute-0 podman[287324]: 2026-01-22 00:19:41.254517301 +0000 UTC m=+0.125560467 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 00:19:41 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:41 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:41 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:19:41 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:41.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:42 compute-0 competent_aryabhata[287328]: --> passed data devices: 0 physical, 1 LVM
Jan 22 00:19:42 compute-0 competent_aryabhata[287328]: --> relative data size: 1.0
Jan 22 00:19:42 compute-0 competent_aryabhata[287328]: --> All data devices are unavailable
Jan 22 00:19:42 compute-0 systemd[1]: libpod-7530ffd481dddd7d7b7ee197cda23f601891c4cb7baf1c1cab4c7c5f702941db.scope: Deactivated successfully.
Jan 22 00:19:42 compute-0 podman[287311]: 2026-01-22 00:19:42.055714616 +0000 UTC m=+1.033713293 container died 7530ffd481dddd7d7b7ee197cda23f601891c4cb7baf1c1cab4c7c5f702941db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 00:19:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f2249376c452c2ddeeacadbf100b68efd8bdd0979053db28940167a016dedb6-merged.mount: Deactivated successfully.
Jan 22 00:19:42 compute-0 podman[287311]: 2026-01-22 00:19:42.113394821 +0000 UTC m=+1.091393478 container remove 7530ffd481dddd7d7b7ee197cda23f601891c4cb7baf1c1cab4c7c5f702941db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_aryabhata, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 00:19:42 compute-0 systemd[1]: libpod-conmon-7530ffd481dddd7d7b7ee197cda23f601891c4cb7baf1c1cab4c7c5f702941db.scope: Deactivated successfully.
Jan 22 00:19:42 compute-0 sudo[287203]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:42 compute-0 sudo[287382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:19:42 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:42 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:42 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:42.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:42 compute-0 sudo[287382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:42 compute-0 sudo[287382]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:42 compute-0 sudo[287406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:19:42 compute-0 sudo[287406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:42 compute-0 sudo[287406]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:42 compute-0 sudo[287426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:19:42 compute-0 ceph-mon[74318]: pgmap v1861: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:42 compute-0 sudo[287426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:42 compute-0 sudo[287426]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:42 compute-0 sudo[287456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:19:42 compute-0 sudo[287456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:42 compute-0 sudo[287456]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:42 compute-0 sudo[287483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:19:42 compute-0 sudo[287483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:42 compute-0 sudo[287483]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:42 compute-0 sudo[287508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- lvm list --format json
Jan 22 00:19:42 compute-0 sudo[287508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:42 compute-0 podman[287574]: 2026-01-22 00:19:42.851080372 +0000 UTC m=+0.060203555 container create e5b9c441d68406d4d619b3dbe2ac0c3ad3e85732e8455caf01d18cbdb91e66db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 00:19:42 compute-0 systemd[1]: Started libpod-conmon-e5b9c441d68406d4d619b3dbe2ac0c3ad3e85732e8455caf01d18cbdb91e66db.scope.
Jan 22 00:19:42 compute-0 podman[287574]: 2026-01-22 00:19:42.817988648 +0000 UTC m=+0.027111871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:19:42 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:19:42 compute-0 podman[287574]: 2026-01-22 00:19:42.942828911 +0000 UTC m=+0.151952144 container init e5b9c441d68406d4d619b3dbe2ac0c3ad3e85732e8455caf01d18cbdb91e66db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 00:19:42 compute-0 podman[287574]: 2026-01-22 00:19:42.954392209 +0000 UTC m=+0.163515402 container start e5b9c441d68406d4d619b3dbe2ac0c3ad3e85732e8455caf01d18cbdb91e66db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 00:19:42 compute-0 podman[287574]: 2026-01-22 00:19:42.957845196 +0000 UTC m=+0.166968389 container attach e5b9c441d68406d4d619b3dbe2ac0c3ad3e85732e8455caf01d18cbdb91e66db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 00:19:42 compute-0 flamboyant_poincare[287590]: 167 167
Jan 22 00:19:42 compute-0 systemd[1]: libpod-e5b9c441d68406d4d619b3dbe2ac0c3ad3e85732e8455caf01d18cbdb91e66db.scope: Deactivated successfully.
Jan 22 00:19:42 compute-0 podman[287574]: 2026-01-22 00:19:42.964010747 +0000 UTC m=+0.173133930 container died e5b9c441d68406d4d619b3dbe2ac0c3ad3e85732e8455caf01d18cbdb91e66db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 00:19:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-db5db53a792edbdeace545f136941437b74cd3f101033de1815cea3df2bfbe81-merged.mount: Deactivated successfully.
Jan 22 00:19:42 compute-0 nova_compute[247516]: 2026-01-22 00:19:42.990 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:19:43 compute-0 podman[287574]: 2026-01-22 00:19:43.005363517 +0000 UTC m=+0.214486740 container remove e5b9c441d68406d4d619b3dbe2ac0c3ad3e85732e8455caf01d18cbdb91e66db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:19:43 compute-0 systemd[1]: libpod-conmon-e5b9c441d68406d4d619b3dbe2ac0c3ad3e85732e8455caf01d18cbdb91e66db.scope: Deactivated successfully.
Jan 22 00:19:43 compute-0 nova_compute[247516]: 2026-01-22 00:19:43.013 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:19:43 compute-0 podman[287612]: 2026-01-22 00:19:43.192031953 +0000 UTC m=+0.055246700 container create 09e856c0462cd06cad444ef02969ddfb39e4a0d1a2ac24ff458ed6f99c2f278d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 00:19:43 compute-0 systemd[1]: Started libpod-conmon-09e856c0462cd06cad444ef02969ddfb39e4a0d1a2ac24ff458ed6f99c2f278d.scope.
Jan 22 00:19:43 compute-0 podman[287612]: 2026-01-22 00:19:43.158864337 +0000 UTC m=+0.022079074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:19:43 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5857f5c57e8fd2a4b57e687eebd8763df5203758f1a81c5f219e962c47bcd422/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5857f5c57e8fd2a4b57e687eebd8763df5203758f1a81c5f219e962c47bcd422/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:19:43 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5857f5c57e8fd2a4b57e687eebd8763df5203758f1a81c5f219e962c47bcd422/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5857f5c57e8fd2a4b57e687eebd8763df5203758f1a81c5f219e962c47bcd422/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:19:43 compute-0 podman[287612]: 2026-01-22 00:19:43.300525191 +0000 UTC m=+0.163739988 container init 09e856c0462cd06cad444ef02969ddfb39e4a0d1a2ac24ff458ed6f99c2f278d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banach, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:19:43 compute-0 podman[287612]: 2026-01-22 00:19:43.309380965 +0000 UTC m=+0.172595682 container start 09e856c0462cd06cad444ef02969ddfb39e4a0d1a2ac24ff458ed6f99c2f278d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banach, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 00:19:43 compute-0 podman[287612]: 2026-01-22 00:19:43.314025589 +0000 UTC m=+0.177240336 container attach 09e856c0462cd06cad444ef02969ddfb39e4a0d1a2ac24ff458ed6f99c2f278d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:19:43 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:43 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:43 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:43.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:43 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:19:44 compute-0 wonderful_banach[287629]: {
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:     "1": [
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:         {
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:             "devices": [
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:                 "/dev/loop3"
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:             ],
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:             "lv_name": "ceph_lv0",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:             "lv_size": "7511998464",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3759241a-7f1c-520d-ba17-879943ee2f00,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f45f4f4-edfc-474c-93fc-45d596171ed8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:             "lv_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:             "name": "ceph_lv0",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:             "tags": {
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:                 "ceph.block_uuid": "7tKRTc-6FRz-Ikmv-t96B-3BdT-QhbM-CySyF6",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:                 "ceph.cluster_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:                 "ceph.cluster_name": "ceph",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:                 "ceph.crush_device_class": "",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:                 "ceph.encrypted": "0",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:                 "ceph.osd_fsid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:                 "ceph.osd_id": "1",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:                 "ceph.type": "block",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:                 "ceph.vdo": "0"
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:             },
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:             "type": "block",
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:             "vg_name": "ceph_vg0"
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:         }
Jan 22 00:19:44 compute-0 wonderful_banach[287629]:     ]
Jan 22 00:19:44 compute-0 wonderful_banach[287629]: }
Jan 22 00:19:44 compute-0 systemd[1]: libpod-09e856c0462cd06cad444ef02969ddfb39e4a0d1a2ac24ff458ed6f99c2f278d.scope: Deactivated successfully.
Jan 22 00:19:44 compute-0 podman[287612]: 2026-01-22 00:19:44.097436064 +0000 UTC m=+0.960650791 container died 09e856c0462cd06cad444ef02969ddfb39e4a0d1a2ac24ff458ed6f99c2f278d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banach, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 00:19:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5857f5c57e8fd2a4b57e687eebd8763df5203758f1a81c5f219e962c47bcd422-merged.mount: Deactivated successfully.
Jan 22 00:19:44 compute-0 podman[287612]: 2026-01-22 00:19:44.164626354 +0000 UTC m=+1.027841081 container remove 09e856c0462cd06cad444ef02969ddfb39e4a0d1a2ac24ff458ed6f99c2f278d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banach, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:19:44 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:44 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:44 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:44.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:44 compute-0 systemd[1]: libpod-conmon-09e856c0462cd06cad444ef02969ddfb39e4a0d1a2ac24ff458ed6f99c2f278d.scope: Deactivated successfully.
Jan 22 00:19:44 compute-0 sudo[287508]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:44 compute-0 sudo[287654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:19:44 compute-0 sudo[287654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:44 compute-0 sudo[287654]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:44 compute-0 ceph-mon[74318]: pgmap v1862: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:44 compute-0 sudo[287679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 00:19:44 compute-0 sudo[287679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:44 compute-0 sudo[287679]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:44 compute-0 sudo[287704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:19:44 compute-0 sudo[287704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:44 compute-0 sudo[287704]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:44 compute-0 sudo[287729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3759241a-7f1c-520d-ba17-879943ee2f00/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3759241a-7f1c-520d-ba17-879943ee2f00 -- raw list --format json
Jan 22 00:19:44 compute-0 sudo[287729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:44 compute-0 podman[287793]: 2026-01-22 00:19:44.919400812 +0000 UTC m=+0.039731820 container create 7a89bc5e3f68a98520d84245fc4f69d40d59f8e68aa1f62c77d674d067f2b64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_meitner, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 00:19:44 compute-0 systemd[1]: Started libpod-conmon-7a89bc5e3f68a98520d84245fc4f69d40d59f8e68aa1f62c77d674d067f2b64c.scope.
Jan 22 00:19:44 compute-0 nova_compute[247516]: 2026-01-22 00:19:44.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:19:44 compute-0 nova_compute[247516]: 2026-01-22 00:19:44.993 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:19:45 compute-0 podman[287793]: 2026-01-22 00:19:44.904822111 +0000 UTC m=+0.025153139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:19:45 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:19:45 compute-0 podman[287793]: 2026-01-22 00:19:45.016461837 +0000 UTC m=+0.136792885 container init 7a89bc5e3f68a98520d84245fc4f69d40d59f8e68aa1f62c77d674d067f2b64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 00:19:45 compute-0 nova_compute[247516]: 2026-01-22 00:19:45.047 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:19:45 compute-0 nova_compute[247516]: 2026-01-22 00:19:45.048 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:19:45 compute-0 nova_compute[247516]: 2026-01-22 00:19:45.048 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:19:45 compute-0 nova_compute[247516]: 2026-01-22 00:19:45.048 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 00:19:45 compute-0 nova_compute[247516]: 2026-01-22 00:19:45.049 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:19:45 compute-0 podman[287793]: 2026-01-22 00:19:45.050978284 +0000 UTC m=+0.171309302 container start 7a89bc5e3f68a98520d84245fc4f69d40d59f8e68aa1f62c77d674d067f2b64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_meitner, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:19:45 compute-0 podman[287793]: 2026-01-22 00:19:45.055426703 +0000 UTC m=+0.175757731 container attach 7a89bc5e3f68a98520d84245fc4f69d40d59f8e68aa1f62c77d674d067f2b64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_meitner, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 00:19:45 compute-0 goofy_meitner[287809]: 167 167
Jan 22 00:19:45 compute-0 systemd[1]: libpod-7a89bc5e3f68a98520d84245fc4f69d40d59f8e68aa1f62c77d674d067f2b64c.scope: Deactivated successfully.
Jan 22 00:19:45 compute-0 conmon[287809]: conmon 7a89bc5e3f68a98520d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a89bc5e3f68a98520d84245fc4f69d40d59f8e68aa1f62c77d674d067f2b64c.scope/container/memory.events
Jan 22 00:19:45 compute-0 podman[287793]: 2026-01-22 00:19:45.059274171 +0000 UTC m=+0.179605179 container died 7a89bc5e3f68a98520d84245fc4f69d40d59f8e68aa1f62c77d674d067f2b64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_meitner, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 00:19:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c2df4b8ee07cebe39662817c434b0c1dcfdaf9ea7e737b1064cbf27c6d49dba-merged.mount: Deactivated successfully.
Jan 22 00:19:45 compute-0 podman[287793]: 2026-01-22 00:19:45.095223074 +0000 UTC m=+0.215554072 container remove 7a89bc5e3f68a98520d84245fc4f69d40d59f8e68aa1f62c77d674d067f2b64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_meitner, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 00:19:45 compute-0 systemd[1]: libpod-conmon-7a89bc5e3f68a98520d84245fc4f69d40d59f8e68aa1f62c77d674d067f2b64c.scope: Deactivated successfully.
Jan 22 00:19:45 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:45 compute-0 podman[287852]: 2026-01-22 00:19:45.288208406 +0000 UTC m=+0.047166860 container create 9e8a0ff86d8c5eb963caf000031316d47b147c3c64f7a872f31b07242578f9ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mirzakhani, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 00:19:45 compute-0 systemd[1]: Started libpod-conmon-9e8a0ff86d8c5eb963caf000031316d47b147c3c64f7a872f31b07242578f9ef.scope.
Jan 22 00:19:45 compute-0 podman[287852]: 2026-01-22 00:19:45.266376831 +0000 UTC m=+0.025335325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 00:19:45 compute-0 systemd[1]: Started libcrun container.
Jan 22 00:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73dfd331135dfb7de8ecf2103b7dabffc802285a65bdb6728d841697dc01c270/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 00:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73dfd331135dfb7de8ecf2103b7dabffc802285a65bdb6728d841697dc01c270/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 00:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73dfd331135dfb7de8ecf2103b7dabffc802285a65bdb6728d841697dc01c270/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 00:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73dfd331135dfb7de8ecf2103b7dabffc802285a65bdb6728d841697dc01c270/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 00:19:45 compute-0 podman[287852]: 2026-01-22 00:19:45.403762283 +0000 UTC m=+0.162720767 container init 9e8a0ff86d8c5eb963caf000031316d47b147c3c64f7a872f31b07242578f9ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 00:19:45 compute-0 podman[287852]: 2026-01-22 00:19:45.416072494 +0000 UTC m=+0.175030978 container start 9e8a0ff86d8c5eb963caf000031316d47b147c3c64f7a872f31b07242578f9ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mirzakhani, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 00:19:45 compute-0 podman[287852]: 2026-01-22 00:19:45.420224702 +0000 UTC m=+0.179183176 container attach 9e8a0ff86d8c5eb963caf000031316d47b147c3c64f7a872f31b07242578f9ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 00:19:45 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:19:45 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/438370160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:19:45 compute-0 nova_compute[247516]: 2026-01-22 00:19:45.488 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:19:45 compute-0 nova_compute[247516]: 2026-01-22 00:19:45.687 247523 WARNING nova.virt.libvirt.driver [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 00:19:45 compute-0 nova_compute[247516]: 2026-01-22 00:19:45.689 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5133MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 00:19:45 compute-0 nova_compute[247516]: 2026-01-22 00:19:45.689 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:19:45 compute-0 nova_compute[247516]: 2026-01-22 00:19:45.690 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:19:45 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:45 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:19:45 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:45.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:45 compute-0 nova_compute[247516]: 2026-01-22 00:19:45.795 247523 INFO nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Instance b246822e-62e5-45d0-84c6-8abd60cdbeb0 has allocations against this compute host but is not found in the database.
Jan 22 00:19:45 compute-0 nova_compute[247516]: 2026-01-22 00:19:45.796 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 00:19:45 compute-0 nova_compute[247516]: 2026-01-22 00:19:45.796 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 00:19:45 compute-0 nova_compute[247516]: 2026-01-22 00:19:45.883 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 00:19:46 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:46 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:46 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:46.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 00:19:46 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/24506674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:19:46 compute-0 nova_compute[247516]: 2026-01-22 00:19:46.347 247523 DEBUG oslo_concurrency.processutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 00:19:46 compute-0 nova_compute[247516]: 2026-01-22 00:19:46.358 247523 DEBUG nova.compute.provider_tree [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed in ProviderTree for provider: c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 00:19:46 compute-0 musing_mirzakhani[287868]: {
Jan 22 00:19:46 compute-0 musing_mirzakhani[287868]:     "4f45f4f4-edfc-474c-93fc-45d596171ed8": {
Jan 22 00:19:46 compute-0 musing_mirzakhani[287868]:         "ceph_fsid": "3759241a-7f1c-520d-ba17-879943ee2f00",
Jan 22 00:19:46 compute-0 musing_mirzakhani[287868]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 00:19:46 compute-0 musing_mirzakhani[287868]:         "osd_id": 1,
Jan 22 00:19:46 compute-0 musing_mirzakhani[287868]:         "osd_uuid": "4f45f4f4-edfc-474c-93fc-45d596171ed8",
Jan 22 00:19:46 compute-0 musing_mirzakhani[287868]:         "type": "bluestore"
Jan 22 00:19:46 compute-0 musing_mirzakhani[287868]:     }
Jan 22 00:19:46 compute-0 musing_mirzakhani[287868]: }
Jan 22 00:19:46 compute-0 nova_compute[247516]: 2026-01-22 00:19:46.397 247523 DEBUG nova.scheduler.client.report [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Inventory has not changed for provider c0ebcd59-c8be-41e3-9c46-a4b74f020ea8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 00:19:46 compute-0 nova_compute[247516]: 2026-01-22 00:19:46.401 247523 DEBUG nova.compute.resource_tracker [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 00:19:46 compute-0 nova_compute[247516]: 2026-01-22 00:19:46.401 247523 DEBUG oslo_concurrency.lockutils [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:19:46 compute-0 systemd[1]: libpod-9e8a0ff86d8c5eb963caf000031316d47b147c3c64f7a872f31b07242578f9ef.scope: Deactivated successfully.
Jan 22 00:19:46 compute-0 podman[287914]: 2026-01-22 00:19:46.470440174 +0000 UTC m=+0.035838449 container died 9e8a0ff86d8c5eb963caf000031316d47b147c3c64f7a872f31b07242578f9ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mirzakhani, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 00:19:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-73dfd331135dfb7de8ecf2103b7dabffc802285a65bdb6728d841697dc01c270-merged.mount: Deactivated successfully.
Jan 22 00:19:46 compute-0 podman[287914]: 2026-01-22 00:19:46.534903269 +0000 UTC m=+0.100301494 container remove 9e8a0ff86d8c5eb963caf000031316d47b147c3c64f7a872f31b07242578f9ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 00:19:46 compute-0 systemd[1]: libpod-conmon-9e8a0ff86d8c5eb963caf000031316d47b147c3c64f7a872f31b07242578f9ef.scope: Deactivated successfully.
Jan 22 00:19:46 compute-0 sudo[287729]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:46 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 22 00:19:46 compute-0 ceph-mon[74318]: pgmap v1863: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:46 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/438370160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:19:46 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/24506674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:19:47 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:19:47 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 22 00:19:47 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:47 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:19:47 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:47.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:19:47 compute-0 ceph-mon[74318]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:19:48 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 62a6269a-53c2-444b-9c2e-0863d2804894 does not exist
Jan 22 00:19:48 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev 6781897a-099a-4237-9065-fee1c26c45f5 does not exist
Jan 22 00:19:48 compute-0 ceph-mgr[74614]: [progress WARNING root] complete: ev d735cc73-f9fc-4799-8693-ab92146664c1 does not exist
Jan 22 00:19:48 compute-0 sudo[287928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:19:48 compute-0 sudo[287928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:48 compute-0 sudo[287928]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:48 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:48 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:48 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:48.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:48 compute-0 sudo[287953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 00:19:48 compute-0 sudo[287953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:19:48 compute-0 sudo[287953]: pam_unix(sudo:session): session closed for user root
Jan 22 00:19:48 compute-0 nova_compute[247516]: 2026-01-22 00:19:48.401 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:19:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:19:48.781 159050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 00:19:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:19:48.781 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 00:19:48 compute-0 ovn_metadata_agent[159045]: 2026-01-22 00:19:48.781 159050 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 00:19:48 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:19:49 compute-0 ceph-mon[74318]: pgmap v1864: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:19:49 compute-0 ceph-mon[74318]: from='mgr.14132 192.168.122.100:0/1060270195' entity='mgr.compute-0.boqcsl' 
Jan 22 00:19:49 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:49 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:49 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:49 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:49.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:49 compute-0 podman[287979]: 2026-01-22 00:19:49.985703555 +0000 UTC m=+0.085587280 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 00:19:50 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:50 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:50 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:50.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:51 compute-0 ceph-mon[74318]: pgmap v1865: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:51 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:51 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:51 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:51 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:51.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:52 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:52 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:19:52 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:52.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:19:52 compute-0 ceph-mon[74318]: pgmap v1866: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:53 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:53 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:53 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:19:53 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:53.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:53 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:19:54 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:54 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:19:54 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:54.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:54 compute-0 ceph-mon[74318]: pgmap v1867: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] _maybe_adjust
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 22 00:19:54 compute-0 ceph-mgr[74614]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 22 00:19:55 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:55 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:55 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:19:55 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:55.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:56 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:56 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:19:56 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:56.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:56 compute-0 ceph-mon[74318]: pgmap v1868: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:57 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:57 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:57 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:19:57 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:57.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:19:58 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:58 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:19:58 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:19:58.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:19:58 compute-0 ceph-mon[74318]: pgmap v1869: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:58 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:19:59 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:19:59 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:19:59 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:19:59 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:19:59.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:20:00 compute-0 ceph-mon[74318]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 22 00:20:00 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:00 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:20:00 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:00.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:20:01 compute-0 ceph-mon[74318]: pgmap v1870: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:01 compute-0 ceph-mon[74318]: overall HEALTH_OK
Jan 22 00:20:01 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:01 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:01 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:20:01 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:20:01.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:20:02 compute-0 ceph-mon[74318]: pgmap v1871: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:02 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:02 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:20:02 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:02.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:20:02 compute-0 sudo[288005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:20:02 compute-0 sudo[288005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:20:02 compute-0 sudo[288005]: pam_unix(sudo:session): session closed for user root
Jan 22 00:20:02 compute-0 sudo[288030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:20:02 compute-0 sudo[288030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:20:02 compute-0 sudo[288030]: pam_unix(sudo:session): session closed for user root
Jan 22 00:20:03 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:03 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:03 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:20:03 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:20:03.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:20:03 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:20:04 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:04 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:04 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:04.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:04 compute-0 ceph-mon[74318]: pgmap v1872: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:05 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:05 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:05 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:05 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:20:05.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:06 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:06 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:20:06 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:06.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:20:06 compute-0 sshd-session[288057]: Accepted publickey for zuul from 192.168.122.10 port 57332 ssh2: ECDSA SHA256:/+piIzp4HnVMEv5kM8LB/auXQYg4MokaRawV/nzvQXY
Jan 22 00:20:06 compute-0 systemd-logind[786]: New session 52 of user zuul.
Jan 22 00:20:06 compute-0 systemd[1]: Started Session 52 of User zuul.
Jan 22 00:20:06 compute-0 sshd-session[288057]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 00:20:06 compute-0 sudo[288061]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 22 00:20:06 compute-0 sudo[288061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 00:20:06 compute-0 ceph-mon[74318]: pgmap v1873: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:07 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:07 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:07 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:20:07 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:20:07.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:20:08 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:08 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:08 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:08.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:08 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:20:09 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.17901 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:09 compute-0 ceph-mon[74318]: pgmap v1874: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:20:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:20:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:20:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:20:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] scanning for idle connections..
Jan 22 00:20:09 compute-0 ceph-mgr[74614]: [volumes INFO mgr_util] cleaning up connections: []
Jan 22 00:20:09 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:09 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.17907 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:09 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:09 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:09 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:20:09.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:09 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.27791 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:10 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 22 00:20:10 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1783322373' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 00:20:10 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:10 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:20:10 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:10.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:20:10 compute-0 ceph-mon[74318]: from='client.17901 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:10 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1783322373' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 00:20:10 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.27733 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:10 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.27797 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:11 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.27739 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:11 compute-0 ceph-mon[74318]: pgmap v1875: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:11 compute-0 ceph-mon[74318]: from='client.17907 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:11 compute-0 ceph-mon[74318]: from='client.27791 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:11 compute-0 ceph-mon[74318]: from='client.27733 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:11 compute-0 ceph-mon[74318]: from='client.27797 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:11 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3147208821' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 00:20:11 compute-0 ceph-mon[74318]: from='client.27739 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:11 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:11 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:11 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:11 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:20:11.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:11 compute-0 podman[288338]: 2026-01-22 00:20:11.995050721 +0000 UTC m=+0.105265370 container health_status 125f2645672cc12beca2786a09a2997b52167eedbe5d588ae00110acb020998c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Jan 22 00:20:12 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:12 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:20:12 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:12.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:20:12 compute-0 ceph-mon[74318]: pgmap v1876: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:12 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3031067472' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 00:20:13 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:13 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:13 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:13 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:20:13.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:13 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:20:14 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:14 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:14 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:14.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:14 compute-0 ceph-mon[74318]: pgmap v1877: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:15 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:15 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:15 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:15 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:20:15.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:15 compute-0 ovs-vsctl[288417]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 22 00:20:16 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:16 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:20:16 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:16.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:20:16 compute-0 ceph-mon[74318]: pgmap v1878: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:16 compute-0 virtqemud[248175]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 22 00:20:16 compute-0 virtqemud[248175]: hostname: compute-0
Jan 22 00:20:16 compute-0 virtqemud[248175]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 22 00:20:16 compute-0 virtqemud[248175]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 22 00:20:16 compute-0 virtqemud[248175]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 22 00:20:17 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:17 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz asok_command: cache status {prefix=cache status} (starting...)
Jan 22 00:20:17 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz Can't run that command on an inactive MDS!
Jan 22 00:20:17 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz asok_command: client ls {prefix=client ls} (starting...)
Jan 22 00:20:17 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz Can't run that command on an inactive MDS!
Jan 22 00:20:17 compute-0 lvm[288758]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 00:20:17 compute-0 lvm[288758]: VG ceph_vg0 finished
Jan 22 00:20:17 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:17 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:20:17 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:20:17.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:20:18 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.17940 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:18 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:18 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:18 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:18.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:18 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz asok_command: damage ls {prefix=damage ls} (starting...)
Jan 22 00:20:18 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz Can't run that command on an inactive MDS!
Jan 22 00:20:18 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.27754 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 22 00:20:18 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2123313493' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 00:20:18 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz asok_command: dump loads {prefix=dump loads} (starting...)
Jan 22 00:20:18 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz Can't run that command on an inactive MDS!
Jan 22 00:20:18 compute-0 ceph-mon[74318]: pgmap v1879: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:18 compute-0 ceph-mon[74318]: from='client.17940 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:18 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.27757 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:18 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 22 00:20:18 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz Can't run that command on an inactive MDS!
Jan 22 00:20:18 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 22 00:20:18 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz Can't run that command on an inactive MDS!
Jan 22 00:20:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 00:20:18 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2961475142' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:20:18 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:20:19 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 22 00:20:19 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz Can't run that command on an inactive MDS!
Jan 22 00:20:19 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.27830 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:19 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 22 00:20:19 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz Can't run that command on an inactive MDS!
Jan 22 00:20:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 22 00:20:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 00:20:19 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 22 00:20:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2466096406' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 22 00:20:19 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.17979 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:19 compute-0 ceph-mgr[74614]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 00:20:19 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-22T00:20:19.360+0000 7fbf53a93640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 00:20:19 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.27784 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:19 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 22 00:20:19 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz Can't run that command on an inactive MDS!
Jan 22 00:20:19 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 22 00:20:19 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz Can't run that command on an inactive MDS!
Jan 22 00:20:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 22 00:20:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/790696197' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 22 00:20:19 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.27805 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:19 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:19 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:19 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:20:19.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 22 00:20:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/428214589' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 22 00:20:19 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 22 00:20:19 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 00:20:19 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz asok_command: ops {prefix=ops} (starting...)
Jan 22 00:20:19 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz Can't run that command on an inactive MDS!
Jan 22 00:20:19 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.27866 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:19 compute-0 ceph-mgr[74614]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 00:20:19 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-22T00:20:19.920+0000 7fbf53a93640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 00:20:19 compute-0 ceph-mon[74318]: from='client.27754 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:19 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2123313493' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 00:20:19 compute-0 ceph-mon[74318]: from='client.27757 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:19 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2961475142' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:20:19 compute-0 ceph-mon[74318]: from='client.27830 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:19 compute-0 ceph-mon[74318]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 00:20:19 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/43061440' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 00:20:19 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2466096406' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 22 00:20:19 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/989358421' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:20:20 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18009 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:20 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:20 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:20:20 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:20.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:20:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 22 00:20:20 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3051500027' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 00:20:20 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18039 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:20 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz asok_command: session ls {prefix=session ls} (starting...)
Jan 22 00:20:20 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz Can't run that command on an inactive MDS!
Jan 22 00:20:20 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.27838 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:20 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-22T00:20:20.571+0000 7fbf53a93640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 00:20:20 compute-0 ceph-mgr[74614]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 00:20:20 compute-0 ceph-mds[93551]: mds.cephfs.compute-0.zcqesz asok_command: status {prefix=status} (starting...)
Jan 22 00:20:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 22 00:20:20 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4264288389' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 00:20:20 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 22 00:20:20 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2360219626' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 00:20:20 compute-0 podman[289189]: 2026-01-22 00:20:20.964485958 +0000 UTC m=+0.079142941 container health_status b1cca5f1339f68eebd7e071ea535195984235eea1c1d0e51d5b1f45339b05abb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '877cf8e1397de9e49ade65e1fd4a9913f65c00990623bbcd9ff2c8bf8d95ac4f-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-093836e5e43873c47b357280dda0d1e5a69099d97c9357314014c33ee4b351a2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 00:20:21 compute-0 ceph-mon[74318]: pgmap v1880: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:21 compute-0 ceph-mon[74318]: from='client.17979 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mon[74318]: from='client.27784 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/790696197' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mon[74318]: from='client.27805 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/428214589' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3626536708' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mon[74318]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mon[74318]: from='client.27866 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2332043600' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3051500027' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3709914140' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2576152303' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/4264288389' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1118201188' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2967705595' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1113557065' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2360219626' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 22 00:20:21 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1112084408' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.27911 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:21 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 22 00:20:21 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1072161831' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 22 00:20:21 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3318728987' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.27932 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.27895 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:21 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18105 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:21 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-22T00:20:21.788+0000 7fbf53a93640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 00:20:21 compute-0 ceph-mgr[74614]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 00:20:21 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:21 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:20:21 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:20:21.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:20:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 22 00:20:22 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 00:20:22 compute-0 ceph-mon[74318]: from='client.18009 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:22 compute-0 ceph-mon[74318]: from='client.18039 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:22 compute-0 ceph-mon[74318]: from='client.27838 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/604120163' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 22 00:20:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1112084408' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 00:20:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3615992020' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 00:20:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3001858745' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 22 00:20:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1072161831' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 22 00:20:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/4198602164' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 00:20:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3318728987' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 00:20:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3556115769' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 00:20:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/240233998' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 22 00:20:22 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2608218119' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 00:20:22 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.27916 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 22 00:20:22 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1519144776' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 22 00:20:22 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:22 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:22 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:22.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 22 00:20:22 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1561607553' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 00:20:22 compute-0 sudo[289398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:20:22 compute-0 sudo[289398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:20:22 compute-0 sudo[289398]: pam_unix(sudo:session): session closed for user root
Jan 22 00:20:22 compute-0 sudo[289426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 00:20:22 compute-0 sudo[289426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 00:20:22 compute-0 sudo[289426]: pam_unix(sudo:session): session closed for user root
Jan 22 00:20:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 22 00:20:22 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 00:20:22 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 22 00:20:22 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3385154751' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 22 00:20:22 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18162 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:22 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.27989 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:22 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-22T00:20:22.913+0000 7fbf53a93640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 00:20:22 compute-0 ceph-mgr[74614]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 00:20:23 compute-0 ceph-mon[74318]: from='client.27911 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: pgmap v1881: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:23 compute-0 ceph-mon[74318]: from='client.27932 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: from='client.27895 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: from='client.18105 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1737749755' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1706173676' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1519144776' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1561607553' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2878944576' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1630045426' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1128974147' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/845543862' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3385154751' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/4148084802' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1017945803' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/4210625010' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18186 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 22 00:20:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3899751111' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:23 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28013 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18201 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.27967 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:23 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-22T00:20:23.490+0000 7fbf53a93640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 00:20:23 compute-0 ceph-mgr[74614]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 22 00:20:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 22 00:20:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1341153239' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28037 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:23 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:23 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:20:23 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:20:23.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:20:23 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18219 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:23 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 22 00:20:23 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3222352367' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 00:20:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:03.327512+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 1384448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903710 data_alloc: 218103808 data_used: 167936
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:04.327699+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 1384448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 106.975959778s of 107.836074829s, submitted: 256
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:05.327853+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 17973248 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:06.328020+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 143 ms_handle_reset con 0x55889f6d1800 session 0x55889f4414a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 17965056 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:07.328259+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 15712256 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:08.328395+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 143 heartbeat osd_stat(store_statfs(0x1bad98000/0x0/0x1bfc00000, data 0x1dbc2a3/0x1e86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 24018944 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170146 data_alloc: 218103808 data_used: 176128
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:09.328542+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _renew_subs
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 144 ms_handle_reset con 0x55889f6d1c00 session 0x55889c9983c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 144 heartbeat osd_stat(store_statfs(0x1ba598000/0x0/0x1bfc00000, data 0x25bc2a3/0x2686000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 24002560 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:10.328759+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 24002560 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:11.328962+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78733312 unmapped: 23986176 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:12.329155+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 23977984 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:13.329349+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 145 heartbeat osd_stat(store_statfs(0x1ba591000/0x0/0x1bfc00000, data 0x25bfa57/0x268c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 23977984 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176678 data_alloc: 218103808 data_used: 184320
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:14.329510+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 23977984 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:15.329700+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 23977984 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:16.329853+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 145 heartbeat osd_stat(store_statfs(0x1ba591000/0x0/0x1bfc00000, data 0x25bfa57/0x268c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 23977984 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:17.330022+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 23977984 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:18.330181+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 145 heartbeat osd_stat(store_statfs(0x1ba591000/0x0/0x1bfc00000, data 0x25bfa57/0x268c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 23977984 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176678 data_alloc: 218103808 data_used: 184320
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:19.330347+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 23977984 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:20.330475+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 23977984 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:21.330637+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 23977984 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:22.330778+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 23969792 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:23.330984+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 145 heartbeat osd_stat(store_statfs(0x1ba591000/0x0/0x1bfc00000, data 0x25bfa57/0x268c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 23969792 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:24.331139+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176678 data_alloc: 218103808 data_used: 184320
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 23969792 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:25.331322+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 145 heartbeat osd_stat(store_statfs(0x1ba591000/0x0/0x1bfc00000, data 0x25bfa57/0x268c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 23969792 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:26.331476+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 23969792 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28043 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:27.331624+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 23969792 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:28.331764+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 23969792 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:29.332094+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176678 data_alloc: 218103808 data_used: 184320
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 23969792 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:30.332539+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 23969792 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:31.332772+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 145 heartbeat osd_stat(store_statfs(0x1ba591000/0x0/0x1bfc00000, data 0x25bfa57/0x268c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 145 heartbeat osd_stat(store_statfs(0x1ba591000/0x0/0x1bfc00000, data 0x25bfa57/0x268c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 23969792 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:32.332949+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 23969792 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:33.333148+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 23969792 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:34.333303+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176678 data_alloc: 218103808 data_used: 184320
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 23969792 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:35.333493+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 23961600 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:36.333681+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 23961600 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:37.333846+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 145 heartbeat osd_stat(store_statfs(0x1ba591000/0x0/0x1bfc00000, data 0x25bfa57/0x268c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 23961600 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:38.334011+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 23961600 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:39.334138+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176678 data_alloc: 218103808 data_used: 184320
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:40.334333+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 23961600 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:41.334508+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 23961600 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:42.334750+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 23961600 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 145 heartbeat osd_stat(store_statfs(0x1ba591000/0x0/0x1bfc00000, data 0x25bfa57/0x268c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:43.334961+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 23961600 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 145 heartbeat osd_stat(store_statfs(0x1ba591000/0x0/0x1bfc00000, data 0x25bfa57/0x268c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:44.335143+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 23961600 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176678 data_alloc: 218103808 data_used: 184320
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:45.335316+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 23961600 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 145 heartbeat osd_stat(store_statfs(0x1ba591000/0x0/0x1bfc00000, data 0x25bfa57/0x268c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:46.335485+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 23961600 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:47.335717+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 23961600 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:48.335877+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 23961600 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889c4f0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:49.336057+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 23961600 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176678 data_alloc: 218103808 data_used: 184320
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:50.336287+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 23961600 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 145 handle_osd_map epochs [146,147], i have 145, src has [1,147]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 45.179637909s of 45.496875763s, submitted: 67
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 147 ms_handle_reset con 0x55889c4f0400 session 0x55889d6f8f00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 147 heartbeat osd_stat(store_statfs(0x1ba58a000/0x0/0x1bfc00000, data 0x25c336d/0x2693000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:51.336461+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 23945216 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:52.336636+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 23945216 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:53.336926+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 23945216 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 147 heartbeat osd_stat(store_statfs(0x1ba58a000/0x0/0x1bfc00000, data 0x25c336d/0x2693000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:54.337109+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 23945216 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186436 data_alloc: 218103808 data_used: 184320
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:55.337295+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 23945216 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:56.337532+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 23945216 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:57.337775+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 23945216 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:58.337973+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 23928832 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba587000/0x0/0x1bfc00000, data 0x25c4eac/0x2696000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:48:59.338116+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 23928832 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188706 data_alloc: 218103808 data_used: 184320
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba587000/0x0/0x1bfc00000, data 0x25c4eac/0x2696000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:00.338281+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 23928832 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889c4f0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 ms_handle_reset con 0x55889c4f0400 session 0x55889c4bde00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d029c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.933208466s of 10.002811432s, submitted: 34
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 ms_handle_reset con 0x55889d029c00 session 0x55889d2d9c20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:01.338456+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 23920640 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:02.338661+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 23920640 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:03.338865+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 23920640 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:04.339042+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 23920640 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188552 data_alloc: 218103808 data_used: 184320
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:05.343487+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 23920640 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:06.343648+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 23920640 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:07.343809+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 23920640 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:08.343941+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 23920640 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:09.344101+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 23920640 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188552 data_alloc: 218103808 data_used: 184320
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:10.344330+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 23920640 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:11.344526+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 23920640 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:12.344704+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 23920640 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:13.344964+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 23920640 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:14.345115+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 23920640 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188552 data_alloc: 218103808 data_used: 184320
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:15.345289+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 23920640 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:16.345443+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 23920640 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:17.345619+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 23920640 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:18.345832+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 23920640 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:19.346753+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 23904256 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188552 data_alloc: 218103808 data_used: 184320
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:20.346903+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 23904256 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:21.347199+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 23904256 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:22.347372+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 23904256 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:23.347665+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 23904256 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:24.347949+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 23904256 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188552 data_alloc: 218103808 data_used: 184320
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:25.348133+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 23904256 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:26.348323+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 23904256 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:27.348479+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 23904256 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:28.348677+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 23904256 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:29.348833+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 23904256 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188552 data_alloc: 218103808 data_used: 184320
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:30.349001+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 ms_handle_reset con 0x55889d647400 session 0x55889d059a40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 23904256 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 ms_handle_reset con 0x55889f6d0000 session 0x55889c30f680
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:31.349182+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 5783552 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:32.349407+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 5783552 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:33.349642+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 5783552 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:34.349898+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 96944128 unmapped: 5775360 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241032 data_alloc: 234881024 data_used: 18468864
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:35.350114+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 96944128 unmapped: 5775360 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.376014709s of 35.380001068s, submitted: 1
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 ms_handle_reset con 0x55889f6d0400 session 0x55889f7101e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:36.350329+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 96911360 unmapped: 5808128 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba588000/0x0/0x1bfc00000, data 0x25c4efe/0x2696000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:37.350530+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 96911360 unmapped: 5808128 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:38.350931+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 96911360 unmapped: 5808128 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889c4f0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba588000/0x0/0x1bfc00000, data 0x25c4efe/0x2696000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 ms_handle_reset con 0x55889c4f0400 session 0x55889f6d9680
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d029c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:39.351091+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 ms_handle_reset con 0x55889d029c00 session 0x55889f45d860
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 99360768 unmapped: 12812288 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320481 data_alloc: 234881024 data_used: 18468864
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 ms_handle_reset con 0x55889d647400 session 0x55889eb3d0e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:40.351260+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98893824 unmapped: 13279232 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1b9c53000/0x0/0x1bfc00000, data 0x2ef9efe/0x2fcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:41.351441+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98926592 unmapped: 13246464 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:42.351636+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98926592 unmapped: 13246464 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:43.351859+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98926592 unmapped: 13246464 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:44.352096+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98926592 unmapped: 13246464 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320481 data_alloc: 234881024 data_used: 18468864
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1b9c53000/0x0/0x1bfc00000, data 0x2ef9efe/0x2fcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:45.352275+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98926592 unmapped: 13246464 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:46.352450+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98926592 unmapped: 13246464 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:47.352606+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.057116508s of 11.268654823s, submitted: 63
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 ms_handle_reset con 0x55889f6d0000 session 0x55889c9c6b40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98942976 unmapped: 13230080 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 ms_handle_reset con 0x55889f6d1c00 session 0x55889f45d680
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1b9c53000/0x0/0x1bfc00000, data 0x2ef9efe/0x2fcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:48.352769+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98967552 unmapped: 13205504 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:49.352975+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98967552 unmapped: 13205504 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248851 data_alloc: 234881024 data_used: 18468864
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:50.353166+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98967552 unmapped: 13205504 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:51.353349+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98975744 unmapped: 13197312 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:52.353512+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98975744 unmapped: 13197312 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:53.353715+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba506000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98975744 unmapped: 13197312 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:54.353881+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98975744 unmapped: 13197312 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248851 data_alloc: 234881024 data_used: 18468864
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:55.354042+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98975744 unmapped: 13197312 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:56.354216+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98975744 unmapped: 13197312 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:57.354370+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98975744 unmapped: 13197312 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:58.354598+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98975744 unmapped: 13197312 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:49:59.354811+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba506000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248851 data_alloc: 234881024 data_used: 18468864
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98975744 unmapped: 13197312 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:00.354975+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba506000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98975744 unmapped: 13197312 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:01.355148+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98975744 unmapped: 13197312 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:02.355317+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98975744 unmapped: 13197312 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:03.355550+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:04.355774+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248851 data_alloc: 234881024 data_used: 18468864
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:05.355981+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:06.356242+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba506000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:07.356406+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:08.356644+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:09.356803+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248851 data_alloc: 234881024 data_used: 18468864
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:10.356935+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:11.357106+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba506000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:12.357263+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:13.357468+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:14.357624+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248851 data_alloc: 234881024 data_used: 18468864
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:15.357756+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:16.357901+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba506000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:17.358068+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba506000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:18.358227+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:19.358405+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248851 data_alloc: 234881024 data_used: 18468864
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:20.358649+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:21.358851+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba506000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:22.359098+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:23.359301+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:24.359493+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248851 data_alloc: 234881024 data_used: 18468864
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:25.359682+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba506000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:26.359897+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:27.360073+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:28.360239+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba506000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:29.360405+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248851 data_alloc: 234881024 data_used: 18468864
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:30.360595+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba506000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:31.360794+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:32.360999+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:33.361234+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 13189120 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba506000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:34.361385+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98992128 unmapped: 13180928 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248851 data_alloc: 234881024 data_used: 18468864
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:35.361619+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98992128 unmapped: 13180928 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:36.361759+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98992128 unmapped: 13180928 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:37.361928+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98992128 unmapped: 13180928 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:38.362116+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 98992128 unmapped: 13180928 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba506000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889c4f0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 ms_handle_reset con 0x55889c4f0400 session 0x55889f6a4d20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d029c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:39.362267+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 11763712 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 ms_handle_reset con 0x55889d029c00 session 0x55889f44d4a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248851 data_alloc: 234881024 data_used: 19456000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:40.362497+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 11231232 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:41.362614+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 11231232 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba506000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:42.362790+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 11231232 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 55.432090759s of 55.554016113s, submitted: 35
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 ms_handle_reset con 0x55889d647400 session 0x55889f45da40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:43.363002+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 101990400 unmapped: 10182656 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:44.363171+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 101990400 unmapped: 10182656 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250149 data_alloc: 234881024 data_used: 19456000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:45.363342+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 101990400 unmapped: 10182656 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:46.363525+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 101990400 unmapped: 10182656 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:47.365136+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 101990400 unmapped: 10182656 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 ms_handle_reset con 0x55889f6d0000 session 0x55889f354960
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 ms_handle_reset con 0x55889f6d1800 session 0x55889f6a4b40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889c4f0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:48.365382+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 104628224 unmapped: 7544832 heap: 112173056 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 ms_handle_reset con 0x55889c4f0400 session 0x55889e930780
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:49.365533+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 13271040 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323552 data_alloc: 234881024 data_used: 19456000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:50.365707+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 13271040 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:51.365884+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1b9c8b000/0x0/0x1bfc00000, data 0x2ec2e9c/0x2f93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 13271040 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:52.366071+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 13271040 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:53.366282+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 13271040 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:54.366466+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 13271040 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323552 data_alloc: 234881024 data_used: 19456000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:55.366682+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 13271040 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:56.366816+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 13271040 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d029c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.118991852s of 14.295630455s, submitted: 39
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1b9c8b000/0x0/0x1bfc00000, data 0x2ec2e9c/0x2f93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:57.366978+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 ms_handle_reset con 0x55889d029c00 session 0x55889e5390e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102416384 unmapped: 13434880 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:58.367162+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102416384 unmapped: 13434880 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:50:59.367397+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102416384 unmapped: 13434880 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255420 data_alloc: 234881024 data_used: 19456000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:00.367635+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102416384 unmapped: 13434880 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:01.367772+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102424576 unmapped: 13426688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:02.367911+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:03.368106+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:04.368329+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255420 data_alloc: 234881024 data_used: 19456000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:05.368493+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:06.368639+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:07.368795+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:08.368935+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:09.369105+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255420 data_alloc: 234881024 data_used: 19456000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:10.369298+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:11.369437+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:12.369647+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:13.369822+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:14.369998+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255420 data_alloc: 234881024 data_used: 19456000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:15.370138+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:16.370329+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:17.370518+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:18.370667+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:19.370850+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255420 data_alloc: 234881024 data_used: 19456000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:20.371011+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:21.371148+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:22.371290+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:23.371468+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 13418496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:24.371600+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 13410304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255420 data_alloc: 234881024 data_used: 19456000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:25.371760+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 13410304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:26.371927+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 13410304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:27.372091+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 13410304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:28.372287+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 13410304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:29.372432+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 13410304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255420 data_alloc: 234881024 data_used: 19456000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:30.372607+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 13410304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:31.372821+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 13410304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:32.373014+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 13410304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:33.373212+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 13410304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:34.373410+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 13410304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255420 data_alloc: 234881024 data_used: 19456000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:35.373625+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.350292206s of 38.380451202s, submitted: 11
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 13410304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:36.373761+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 ms_handle_reset con 0x55889d647400 session 0x55889e9983c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 13410304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _renew_subs
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 149 ms_handle_reset con 0x55889f6d0000 session 0x55889e999680
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889c4f0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 149 heartbeat osd_stat(store_statfs(0x1ba589000/0x0/0x1bfc00000, data 0x25c4e9c/0x2695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:37.374070+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0170c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 13312000 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 149 ms_handle_reset con 0x55889c4f0c00 session 0x55889e319c20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:38.374300+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _renew_subs
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 150 ms_handle_reset con 0x5588a0170c00 session 0x55889f45d860
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102367232 unmapped: 21880832 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:39.374490+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 21872640 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352837 data_alloc: 234881024 data_used: 19464192
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:40.374612+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 21864448 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:41.374765+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 21864448 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:42.374968+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 21864448 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 150 heartbeat osd_stat(store_statfs(0x1b990f000/0x0/0x1bfc00000, data 0x3238794/0x330d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:43.375142+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 21864448 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:44.375314+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 21864448 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352837 data_alloc: 234881024 data_used: 19464192
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:45.375504+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 21864448 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 150 heartbeat osd_stat(store_statfs(0x1b990f000/0x0/0x1bfc00000, data 0x3238794/0x330d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:46.375755+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 21864448 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:47.375914+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 150 heartbeat osd_stat(store_statfs(0x1b990f000/0x0/0x1bfc00000, data 0x3238794/0x330d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 21864448 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:48.376048+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 21864448 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889c4f0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.448004723s of 13.574493408s, submitted: 18
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 150 heartbeat osd_stat(store_statfs(0x1b990f000/0x0/0x1bfc00000, data 0x3238794/0x330d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:49.376228+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d029c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 151 ms_handle_reset con 0x55889c4f0400 session 0x55889c92f2c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 151 heartbeat osd_stat(store_statfs(0x1b9910000/0x0/0x1bfc00000, data 0x32387b7/0x330e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356358 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:50.376354+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _renew_subs
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102260736 unmapped: 21987328 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 152 ms_handle_reset con 0x55889d029c00 session 0x55889e51f4a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:51.376486+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 152 heartbeat osd_stat(store_statfs(0x1b990d000/0x0/0x1bfc00000, data 0x323a3ed/0x3310000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102301696 unmapped: 21946368 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:52.376675+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102277120 unmapped: 21970944 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 153 ms_handle_reset con 0x55889d647400 session 0x55889eb3c000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:53.377309+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102285312 unmapped: 21962752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:54.377503+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102309888 unmapped: 21938176 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277786 data_alloc: 234881024 data_used: 19484672
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:55.377631+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102309888 unmapped: 21938176 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:56.377815+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102309888 unmapped: 21938176 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 153 heartbeat osd_stat(store_statfs(0x1ba167000/0x0/0x1bfc00000, data 0x25cdd01/0x26a4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:57.377940+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102309888 unmapped: 21938176 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:58.378109+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102309888 unmapped: 21938176 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:51:59.378269+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102318080 unmapped: 21929984 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.601144791s of 10.853977203s, submitted: 91
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283366 data_alloc: 234881024 data_used: 19484672
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 155 ms_handle_reset con 0x55889f6d0000 session 0x55889f7112c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:00.378500+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 155 heartbeat osd_stat(store_statfs(0x1ba163000/0x0/0x1bfc00000, data 0x25d1519/0x26aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 155 heartbeat osd_stat(store_statfs(0x1ba163000/0x0/0x1bfc00000, data 0x25d1519/0x26aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:01.378722+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:02.378920+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 155 heartbeat osd_stat(store_statfs(0x1ba163000/0x0/0x1bfc00000, data 0x25d1519/0x26aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:03.379119+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:04.379284+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286340 data_alloc: 234881024 data_used: 19484672
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:05.379454+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:06.379639+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 156 heartbeat osd_stat(store_statfs(0x1ba160000/0x0/0x1bfc00000, data 0x25d3074/0x26ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:07.379790+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:08.380028+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:09.380220+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _renew_subs
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289314 data_alloc: 234881024 data_used: 19484672
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:10.380438+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:11.380614+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 157 heartbeat osd_stat(store_statfs(0x1ba15d000/0x0/0x1bfc00000, data 0x25d4bb3/0x26b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:12.380791+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:13.380990+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:14.381151+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289314 data_alloc: 234881024 data_used: 19484672
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:15.381343+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:16.381522+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:17.381691+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 157 heartbeat osd_stat(store_statfs(0x1ba15d000/0x0/0x1bfc00000, data 0x25d4bb3/0x26b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:18.381845+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:19.382015+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289314 data_alloc: 234881024 data_used: 19484672
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:20.382161+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 157 heartbeat osd_stat(store_statfs(0x1ba15d000/0x0/0x1bfc00000, data 0x25d4bb3/0x26b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:21.382284+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 157 heartbeat osd_stat(store_statfs(0x1ba15d000/0x0/0x1bfc00000, data 0x25d4bb3/0x26b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:22.382493+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:23.382767+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:24.382978+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289314 data_alloc: 234881024 data_used: 19484672
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:25.383112+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:26.383256+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:27.383431+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 157 heartbeat osd_stat(store_statfs(0x1ba15d000/0x0/0x1bfc00000, data 0x25d4bb3/0x26b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:28.383594+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:29.383751+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289314 data_alloc: 234881024 data_used: 19484672
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:30.383897+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:31.384032+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:32.384186+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:33.384460+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 157 heartbeat osd_stat(store_statfs(0x1ba15d000/0x0/0x1bfc00000, data 0x25d4bb3/0x26b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:34.384623+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289314 data_alloc: 234881024 data_used: 19484672
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:35.384765+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:36.384892+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:37.385019+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:38.385268+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 157 heartbeat osd_stat(store_statfs(0x1ba15d000/0x0/0x1bfc00000, data 0x25d4bb3/0x26b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:39.385426+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289314 data_alloc: 234881024 data_used: 19484672
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:40.385647+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:41.385811+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 157 heartbeat osd_stat(store_statfs(0x1ba15d000/0x0/0x1bfc00000, data 0x25d4bb3/0x26b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:42.385961+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:43.386143+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:44.386303+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 157 heartbeat osd_stat(store_statfs(0x1ba15d000/0x0/0x1bfc00000, data 0x25d4bb3/0x26b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289314 data_alloc: 234881024 data_used: 19484672
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:45.386523+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:46.386693+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:47.386846+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:48.387022+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:49.387199+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289314 data_alloc: 234881024 data_used: 19484672
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:50.387362+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 157 heartbeat osd_stat(store_statfs(0x1ba15d000/0x0/0x1bfc00000, data 0x25d4bb3/0x26b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:51.387615+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 157 heartbeat osd_stat(store_statfs(0x1ba15d000/0x0/0x1bfc00000, data 0x25d4bb3/0x26b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 22200320 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:52.387785+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102055936 unmapped: 22192128 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:53.387990+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102055936 unmapped: 22192128 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:54.388180+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102055936 unmapped: 22192128 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 157 heartbeat osd_stat(store_statfs(0x1ba15d000/0x0/0x1bfc00000, data 0x25d4bb3/0x26b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289314 data_alloc: 234881024 data_used: 19484672
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:55.388373+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102055936 unmapped: 22192128 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:56.388613+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102055936 unmapped: 22192128 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:57.388731+0000)
Jan 22 00:20:24 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28000 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102055936 unmapped: 22192128 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 157 heartbeat osd_stat(store_statfs(0x1ba15d000/0x0/0x1bfc00000, data 0x25d4bb3/0x26b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:58.388891+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102055936 unmapped: 22192128 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:52:59.389041+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102055936 unmapped: 22192128 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289314 data_alloc: 234881024 data_used: 19484672
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:00.389207+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102055936 unmapped: 22192128 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:01.389349+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102055936 unmapped: 22192128 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:02.389522+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102055936 unmapped: 22192128 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 157 heartbeat osd_stat(store_statfs(0x1ba15d000/0x0/0x1bfc00000, data 0x25d4bb3/0x26b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:03.389723+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102055936 unmapped: 22192128 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:04.389908+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102055936 unmapped: 22192128 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:05.390078+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289314 data_alloc: 234881024 data_used: 19484672
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 157 heartbeat osd_stat(store_statfs(0x1ba15d000/0x0/0x1bfc00000, data 0x25d4bb3/0x26b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102055936 unmapped: 22192128 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:06.390248+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102055936 unmapped: 22192128 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:07.390379+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102055936 unmapped: 22192128 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:08.390532+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102055936 unmapped: 22192128 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889c4f0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 69.012947083s of 69.087440491s, submitted: 48
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 157 heartbeat osd_stat(store_statfs(0x1ba15d000/0x0/0x1bfc00000, data 0x25d4bb3/0x26b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [0,0,0,0,0,1])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:09.390620+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 157 heartbeat osd_stat(store_statfs(0x1ba15d000/0x0/0x1bfc00000, data 0x25d4bb3/0x26b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _renew_subs
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 158 ms_handle_reset con 0x55889c4f0400 session 0x55889f44e1e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102072320 unmapped: 22175744 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 158 heartbeat osd_stat(store_statfs(0x1ba158000/0x0/0x1bfc00000, data 0x25d6c2f/0x26b5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:10.390768+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298105 data_alloc: 234881024 data_used: 19492864
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102072320 unmapped: 22175744 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 158 heartbeat osd_stat(store_statfs(0x1ba157000/0x0/0x1bfc00000, data 0x25d6c52/0x26b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:11.390911+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102072320 unmapped: 22175744 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:12.391087+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 158 heartbeat osd_stat(store_statfs(0x1ba157000/0x0/0x1bfc00000, data 0x25d6c52/0x26b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102072320 unmapped: 22175744 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:13.391270+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102072320 unmapped: 22175744 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:14.391461+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d029c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 22167552 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:15.391623+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301956 data_alloc: 234881024 data_used: 19492864
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 159 ms_handle_reset con 0x55889d029c00 session 0x55889f7110e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 22167552 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:16.391967+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 22167552 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:17.392143+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 22167552 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 159 heartbeat osd_stat(store_statfs(0x1ba152000/0x0/0x1bfc00000, data 0x25d8cbb/0x26bb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:18.392315+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:19.392469+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _renew_subs
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.252239227s of 10.481765747s, submitted: 35
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 160 ms_handle_reset con 0x55889d647400 session 0x55889f44fe00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:20.392678+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310345 data_alloc: 234881024 data_used: 19509248
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:21.392811+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:22.392951+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:23.393143+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 160 heartbeat osd_stat(store_statfs(0x1ba14f000/0x0/0x1bfc00000, data 0x25da914/0x26be000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:24.393328+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:25.393477+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310345 data_alloc: 234881024 data_used: 19509248
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:26.393659+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 160 heartbeat osd_stat(store_statfs(0x1ba14f000/0x0/0x1bfc00000, data 0x25da914/0x26be000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:27.393781+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:28.393922+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0170c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:29.394100+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 160 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.999652863s of 10.038755417s, submitted: 12
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 161 ms_handle_reset con 0x5588a0170c00 session 0x55889f45c000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:30.394332+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313047 data_alloc: 234881024 data_used: 19517440
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0171000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 22249472 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 161 heartbeat osd_stat(store_statfs(0x1ba14d000/0x0/0x1bfc00000, data 0x25dc5c1/0x26c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:31.394460+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _renew_subs
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 162 ms_handle_reset con 0x5588a0171000 session 0x55889e6092c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102014976 unmapped: 22233088 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889c4f0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:32.394608+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102031360 unmapped: 22216704 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 163 ms_handle_reset con 0x55889c4f0400 session 0x55889f701a40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:33.394793+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102064128 unmapped: 22183936 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:34.395011+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102064128 unmapped: 22183936 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 163 heartbeat osd_stat(store_statfs(0x1ba14b000/0x0/0x1bfc00000, data 0x25df6c5/0x26c2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:35.395248+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314624 data_alloc: 234881024 data_used: 19464192
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102064128 unmapped: 22183936 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:36.395434+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102064128 unmapped: 22183936 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:37.395655+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102064128 unmapped: 22183936 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:38.395832+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102072320 unmapped: 22175744 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:39.396001+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102072320 unmapped: 22175744 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:40.396163+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318622 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 164 heartbeat osd_stat(store_statfs(0x1ba148000/0x0/0x1bfc00000, data 0x25e1220/0x26c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102072320 unmapped: 22175744 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:41.396307+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102072320 unmapped: 22175744 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:42.396463+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.680390358s of 13.122662544s, submitted: 124
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 22167552 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:43.396647+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 22167552 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:44.396811+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 22167552 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:45.397001+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321596 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 22167552 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:46.397149+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba145000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 22167552 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:47.397308+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 22167552 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:48.397446+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 22167552 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:49.397629+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 22167552 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:50.397803+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba145000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321596 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 22167552 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:51.397959+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 22167552 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:52.398145+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:53.398343+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba145000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:54.398541+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:55.398743+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321596 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:56.398934+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba145000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:57.399086+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:58.399222+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba145000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:53:59.399349+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:00.399481+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321596 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:01.399624+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:02.399789+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:03.400066+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:04.400269+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba145000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:05.400754+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321596 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba145000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:06.401054+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:07.401217+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:08.401398+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:09.401595+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:10.401754+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321596 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba145000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:11.401937+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:12.402125+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:13.402320+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:14.402496+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:15.402637+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321596 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:16.402830+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba145000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:17.402988+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:18.403163+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:19.403308+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d029c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.251670837s of 37.262172699s, submitted: 14
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889d029c00 session 0x55889c8ea780
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889d647400 session 0x55889e930000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba145000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:20.403472+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322213 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 22167552 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:21.403616+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0170c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x5588a0170c00 session 0x55889f048f00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0171400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x5588a0171400 session 0x55889f3d7680
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 22167552 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba145000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:22.403816+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 22167552 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:23.404015+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 22167552 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba145000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:24.404265+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:25.404455+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324747 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba145000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:26.404670+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba145000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:27.404961+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:28.405120+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:29.405283+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102088704 unmapped: 22159360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:30.405495+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324747 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889c4f0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.779645920s of 10.851481438s, submitted: 20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889c4f0400 session 0x55889f701c20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d029c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102096896 unmapped: 22151168 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889d029c00 session 0x55889e51ef00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:31.405642+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102285312 unmapped: 21962752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:32.405779+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889d647400 session 0x55889f4323c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0170c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba146000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x5588a0170c00 session 0x55889e98da40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:33.406013+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba146000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:34.406177+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:35.406323+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322526 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:36.406487+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba146000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:37.406720+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba146000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:38.406904+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 8388 writes, 30K keys, 8388 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8388 writes, 2056 syncs, 4.08 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1911 writes, 4450 keys, 1911 commit groups, 1.0 writes per commit group, ingest: 2.11 MB, 0.00 MB/s
                                           Interval WAL: 1911 writes, 846 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:39.407081+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba146000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:40.407280+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322526 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:41.407438+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:42.407662+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:43.407900+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:44.408090+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:45.408261+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba146000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322526 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:46.408439+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:47.408629+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:48.408824+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889befe800 session 0x55889bcc2f00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889befe800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba146000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:49.408975+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889c301000 session 0x55889f3d6780
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f3fd800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889c817400 session 0x55889f3cb2c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889c301000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba146000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:50.409141+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322526 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:51.409306+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba146000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:52.409477+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba146000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:53.409698+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:54.409854+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:55.410026+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322526 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba146000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:56.410254+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 21995520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:57.410713+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102268928 unmapped: 21979136 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:58.410894+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba146000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102268928 unmapped: 21979136 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:54:59.411066+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102268928 unmapped: 21979136 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:00.411225+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322526 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102268928 unmapped: 21979136 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:01.411392+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102268928 unmapped: 21979136 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:02.411615+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba146000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102268928 unmapped: 21979136 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:03.411937+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102268928 unmapped: 21979136 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:04.412149+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102268928 unmapped: 21979136 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:05.412374+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322526 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102268928 unmapped: 21979136 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:06.412617+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102268928 unmapped: 21979136 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:07.412825+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba146000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102268928 unmapped: 21979136 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:08.412983+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102277120 unmapped: 21970944 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:09.413184+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102285312 unmapped: 21962752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:10.413376+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322526 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102285312 unmapped: 21962752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:11.413588+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102285312 unmapped: 21962752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:12.413816+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba146000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102285312 unmapped: 21962752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:13.414036+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102285312 unmapped: 21962752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:14.414227+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba146000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102285312 unmapped: 21962752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:15.414445+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322526 data_alloc: 234881024 data_used: 19472384
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889d647400 session 0x55889f765680
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102285312 unmapped: 21962752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0170c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:16.414596+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x5588a0170c00 session 0x55889f764780
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 21569536 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:17.414827+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 21569536 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:18.415049+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0171800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.381137848s of 47.844345093s, submitted: 197
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x5588a0171800 session 0x55889f434d20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102686720 unmapped: 21561344 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:19.415289+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1ba144000/0x0/0x1bfc00000, data 0x25e2dd1/0x26ca000, compress 0x0/0x0/0x0, omap 0x639, meta 0x33ef9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102686720 unmapped: 21561344 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:20.415619+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325496 data_alloc: 234881024 data_used: 19599360
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 102686720 unmapped: 21561344 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:21.415794+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0171c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x5588a0171c00 session 0x55889f440960
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a017dc00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x5588a017dc00 session 0x55889d058f00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105570304 unmapped: 18677760 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889d647400 session 0x55889f44ed20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:22.415976+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f6d1400 session 0x55889f3c5c20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 18464768 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:23.416203+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f6d1000 session 0x55889e317680
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b7e6e000/0x0/0x1bfc00000, data 0x3719d6f/0x3800000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 18464768 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:24.416392+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 18464768 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b7e6e000/0x0/0x1bfc00000, data 0x3719d6f/0x3800000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:25.416593+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f71ac00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f71ac00 session 0x55889f6d9860
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1458762 data_alloc: 234881024 data_used: 19603456
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 18464768 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:26.416736+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f71a800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f71a800 session 0x55889e71cb40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105848832 unmapped: 18399232 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889d647400 session 0x55889d6f85a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:27.416934+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b7e6d000/0x0/0x1bfc00000, data 0x3719dd1/0x3801000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105848832 unmapped: 18399232 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:28.417118+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b7e6d000/0x0/0x1bfc00000, data 0x3719dd1/0x3801000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.137571335s of 10.375650406s, submitted: 49
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f6d1000 session 0x55889d289680
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105848832 unmapped: 18399232 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:29.417329+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105848832 unmapped: 18399232 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:30.417500+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456208 data_alloc: 234881024 data_used: 19542016
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105848832 unmapped: 18399232 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:31.417705+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f6d1400 session 0x55889f3c4f00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b7e6c000/0x0/0x1bfc00000, data 0x3719e33/0x3802000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f71ac00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105873408 unmapped: 18374656 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:32.417869+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f71ac00 session 0x55889f7101e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0170c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x5588a0170c00 session 0x55889f433c20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 17965056 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:33.418125+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 17965056 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:34.418334+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889d647400 session 0x55889f700d20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106078208 unmapped: 18169856 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:35.418477+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1557815 data_alloc: 234881024 data_used: 19542016
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106078208 unmapped: 18169856 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:36.418654+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f6d1000 session 0x55889f44d4a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 18161664 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:37.418817+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b71bb000/0x0/0x1bfc00000, data 0x43cae33/0x44b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 18161664 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:38.418999+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 18161664 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:39.419224+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b71bb000/0x0/0x1bfc00000, data 0x43cae33/0x44b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106094592 unmapped: 18153472 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:40.419371+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1557815 data_alloc: 234881024 data_used: 19542016
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106102784 unmapped: 18145280 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:41.419522+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106102784 unmapped: 18145280 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:42.419724+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106102784 unmapped: 18145280 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:43.419920+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106102784 unmapped: 18145280 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b71bb000/0x0/0x1bfc00000, data 0x43cae33/0x44b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:44.420088+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106110976 unmapped: 18137088 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:45.420266+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1557815 data_alloc: 234881024 data_used: 19542016
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106110976 unmapped: 18137088 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:46.420456+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106110976 unmapped: 18137088 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:47.420616+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106110976 unmapped: 18137088 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:48.420810+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106110976 unmapped: 18137088 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:49.421008+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b71bb000/0x0/0x1bfc00000, data 0x43cae33/0x44b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106110976 unmapped: 18137088 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:50.421214+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1557815 data_alloc: 234881024 data_used: 19542016
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106110976 unmapped: 18137088 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:51.421393+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106119168 unmapped: 18128896 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:52.421685+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106127360 unmapped: 18120704 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:53.421934+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106127360 unmapped: 18120704 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:54.422173+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106127360 unmapped: 18120704 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:55.422349+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b71bb000/0x0/0x1bfc00000, data 0x43cae33/0x44b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.669828415s of 26.894210815s, submitted: 61
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558062 data_alloc: 234881024 data_used: 19546112
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f6d1400 session 0x55889f433a40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106135552 unmapped: 18112512 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:56.422537+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106135552 unmapped: 18112512 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:57.422702+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f71ac00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f71ac00 session 0x55889e90c780
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106135552 unmapped: 18112512 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:58.422927+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0171800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x5588a0171800 session 0x55889e98cf00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106135552 unmapped: 18112512 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:55:59.423083+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889d647400 session 0x55889be9a780
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 18055168 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:00.423291+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1432786 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 18055168 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:01.423465+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b82f4000/0x0/0x1bfc00000, data 0x3293dc1/0x337a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 18055168 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:02.423650+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b82f4000/0x0/0x1bfc00000, data 0x3293dc1/0x337a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 18055168 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:03.423830+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 18046976 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:04.424024+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 18046976 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:05.424187+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1432786 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 18046976 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:06.424391+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.898041725s of 11.039869308s, submitted: 47
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f6d1000 session 0x55889f6a5a40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 18038784 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:07.424601+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b82f4000/0x0/0x1bfc00000, data 0x3293dc1/0x337a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f6d1400 session 0x55889f049c20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f71ac00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f71ac00 session 0x55889f6d8f00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105758720 unmapped: 18489344 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:08.424767+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105758720 unmapped: 18489344 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:09.424948+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0171c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x5588a0171c00 session 0x55889f45c3c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b82f4000/0x0/0x1bfc00000, data 0x3293dc1/0x337a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889d647400 session 0x55889e98c5a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105791488 unmapped: 18456576 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:10.425113+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340159 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105791488 unmapped: 18456576 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:11.425241+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105791488 unmapped: 18456576 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:12.425380+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8e91000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:13.425619+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105791488 unmapped: 18456576 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:14.425785+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105791488 unmapped: 18456576 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:15.425973+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105791488 unmapped: 18456576 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340159 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:16.426167+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105791488 unmapped: 18456576 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:17.426387+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 105791488 unmapped: 18456576 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.311623573s of 10.518497467s, submitted: 63
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:18.426595+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 17285120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f6d1000 session 0x55889f433860
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:19.426770+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 17039360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:20.426948+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 16916480 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:21.427119+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 16916480 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:22.427331+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 16916480 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:23.427539+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 16916480 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:24.427779+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 16916480 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:25.427984+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 16908288 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:26.428159+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 16908288 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:27.428337+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 16908288 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:28.428544+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107347968 unmapped: 16900096 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:29.428809+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107347968 unmapped: 16900096 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.556907654s of 12.349334717s, submitted: 259
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f6d1400 session 0x55889d6f9860
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:30.428983+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107356160 unmapped: 16891904 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:31.429159+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107356160 unmapped: 16891904 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:32.429352+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107356160 unmapped: 16891904 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:33.429619+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107356160 unmapped: 16891904 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:34.429831+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107356160 unmapped: 16891904 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:35.430015+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107356160 unmapped: 16891904 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:36.430141+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 16875520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:37.430292+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 16875520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:38.430426+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 16875520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:39.430642+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 16867328 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:40.430814+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 16867328 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:41.431055+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 16859136 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:42.431349+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 16859136 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:43.431642+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 16859136 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:44.431829+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 16859136 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:45.432002+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107397120 unmapped: 16850944 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:46.432168+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107397120 unmapped: 16850944 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:47.432421+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107397120 unmapped: 16850944 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:48.432663+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107397120 unmapped: 16850944 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:49.432925+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107397120 unmapped: 16850944 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:50.433141+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107397120 unmapped: 16850944 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:51.433312+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 16842752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:52.433544+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 16842752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:53.433814+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 16842752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:54.434050+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 16842752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:55.434205+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 16842752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:56.434324+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 16842752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:57.434491+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 16842752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:58.434682+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 16842752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:56:59.434897+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 16842752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:00.435109+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 16842752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:01.435359+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 16842752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:02.435619+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 16842752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:03.435949+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 16842752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:04.436262+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 16842752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:05.436444+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 16842752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:06.436624+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 16842752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:07.436812+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 16842752 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:08.437016+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 16834560 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:09.437336+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 16834560 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:10.437540+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 16834560 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:11.437775+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 16834560 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:12.437946+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 16834560 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:13.438171+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 16834560 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:14.438309+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 16834560 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:15.438471+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 16834560 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:16.438653+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 16834560 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:17.438775+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 16834560 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:18.438892+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 16834560 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:19.439036+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 16826368 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:20.439233+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 16826368 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:21.439399+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 16826368 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:22.439664+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 16826368 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:23.439929+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 16826368 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:24.440193+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 16826368 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:25.440445+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 16826368 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:26.440654+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 16826368 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:27.440848+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 16826368 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:28.441034+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 16826368 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:29.441230+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 16818176 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:30.441452+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 16818176 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:31.441749+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 16809984 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:32.441945+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 16809984 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:33.442187+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 16809984 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:34.442365+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 16809984 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:35.442533+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 16809984 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:36.442822+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 16809984 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:37.443093+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 16809984 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:38.443372+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 16809984 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:39.443674+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 16809984 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:40.443837+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 16809984 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:41.444029+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 16809984 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:42.444238+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 16809984 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:43.444439+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 16809984 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:44.444584+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 16809984 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:45.444796+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 16801792 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:46.444976+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 16801792 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:47.445148+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 16801792 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:48.445330+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 16801792 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:49.445500+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 16801792 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:50.445675+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 16801792 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:51.445886+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 16801792 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:52.446008+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 16801792 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:53.446290+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 16801792 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:54.446437+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 16801792 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:55.446641+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 16801792 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:56.446845+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339983 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107454464 unmapped: 16793600 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:57.447053+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107454464 unmapped: 16793600 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:58.447208+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107454464 unmapped: 16793600 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f71ac00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f71ac00 session 0x55889f6d8b40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0171c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:57:59.447422+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x5588a0171c00 session 0x55889f45dc20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 16875520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:00.447628+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 16875520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:01.447792+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1344303 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 16875520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 92.183456421s of 92.201728821s, submitted: 5
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889d647400 session 0x55889f6a4f00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:02.448001+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 16875520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:03.448268+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 16875520 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:04.448483+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f6d1000 session 0x55889f3ca960
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 16859136 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f6d1400 session 0x55889f418780
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:05.448684+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f71ac00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107724800 unmapped: 20717568 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f71ac00 session 0x55889f34ef00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0171c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x5588a0171c00 session 0x55889f425e00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:06.448935+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b85f1000/0x0/0x1bfc00000, data 0x2f96d6f/0x307d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423080 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 20709376 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:07.449114+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 20709376 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:08.449294+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 20709376 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b85f1000/0x0/0x1bfc00000, data 0x2f96d6f/0x307d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:09.449503+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 20709376 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:10.449771+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 20709376 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:11.450069+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423080 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 20709376 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:12.450286+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107741184 unmapped: 20701184 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:13.450612+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107741184 unmapped: 20701184 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b85f1000/0x0/0x1bfc00000, data 0x2f96d6f/0x307d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:14.450817+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.461376190s of 12.594918251s, submitted: 27
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889d647400 session 0x55889c30f680
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107749376 unmapped: 20692992 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f6d1000 session 0x55889d5c9a40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b85f1000/0x0/0x1bfc00000, data 0x2f96d6f/0x307d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:15.450965+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 20676608 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:16.451160+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352060 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 20676608 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:17.451324+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 20676608 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:18.451524+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 20676608 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:19.451707+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 20676608 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:20.451898+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 20676608 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:21.452045+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352060 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 20676608 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:22.452212+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 20676608 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:23.452374+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 20676608 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:24.452586+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 20676608 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:25.452723+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 20676608 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:26.452890+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352060 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 20676608 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:27.453031+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 20676608 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:28.453198+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 20676608 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:29.453378+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107773952 unmapped: 20668416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:30.453517+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107773952 unmapped: 20668416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:31.453730+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352060 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107773952 unmapped: 20668416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:32.453955+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107773952 unmapped: 20668416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:33.454277+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107773952 unmapped: 20668416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:34.454527+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107773952 unmapped: 20668416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:35.454765+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107773952 unmapped: 20668416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:36.454961+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352060 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107773952 unmapped: 20668416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:37.455113+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107773952 unmapped: 20668416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:38.455325+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107773952 unmapped: 20668416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:39.455504+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107773952 unmapped: 20668416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:40.455653+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107773952 unmapped: 20668416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:41.455836+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352060 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107773952 unmapped: 20668416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:42.456106+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107773952 unmapped: 20668416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:43.456383+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107782144 unmapped: 20660224 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:44.456692+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 20652032 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:45.456936+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 20652032 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:46.457136+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352060 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 20652032 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:47.457395+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 20652032 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:48.457550+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 20652032 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:49.457763+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107798528 unmapped: 20643840 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:50.457924+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107798528 unmapped: 20643840 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:51.458131+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352060 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107798528 unmapped: 20643840 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:52.458324+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107798528 unmapped: 20643840 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:24 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:24 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:24.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:53.458518+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 20635648 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:54.458689+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 20635648 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:55.458839+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 20635648 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:56.459016+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352060 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 20635648 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:57.459238+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 20635648 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:58.459391+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 20635648 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:58:59.459646+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 20635648 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:00.459820+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 20635648 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:01.459972+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352060 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 20635648 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:02.460140+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 20635648 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:03.460336+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 20635648 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:04.460635+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 20627456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:05.460872+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 20627456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:06.461081+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352060 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 20627456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:07.461267+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f6d1400 session 0x55889f44f680
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f71ac00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 20627456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f71ac00 session 0x55889f6d9680
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:08.461539+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107749376 unmapped: 20692992 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:09.461838+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f404400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 55.139923096s of 55.222141266s, submitted: 28
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f404400 session 0x55889f354b40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107757568 unmapped: 20684800 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa5000/0x0/0x1bfc00000, data 0x25e2d6f/0x26c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:10.462103+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107757568 unmapped: 20684800 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:11.462357+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349568 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 20676608 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:12.462672+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889d647400 session 0x55889d5c8b40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f6d1000 session 0x55889f765e00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 17858560 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f6d1400 session 0x55889f424d20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f71ac00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:13.462970+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f71ac00 session 0x55889e998d20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 21020672 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b889d000/0x0/0x1bfc00000, data 0x2cead6f/0x2dd1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:14.463201+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 21020672 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:15.463428+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b889d000/0x0/0x1bfc00000, data 0x2cead6f/0x2dd1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 21020672 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:16.463740+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406502 data_alloc: 234881024 data_used: 19542016
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889e3c0c00 session 0x55889f3d7a40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 21012480 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:17.463977+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 21012480 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:18.464141+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 21012480 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:19.464326+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 21012480 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889d647400 session 0x55889d6f8d20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:20.464505+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.482115746s of 10.672660828s, submitted: 22
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889e3c0c00 session 0x55889e9bab40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 21012480 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:21.464676+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356606 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa5000/0x0/0x1bfc00000, data 0x25e2d6f/0x26c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 21012480 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:22.464856+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 21012480 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:23.465095+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 21012480 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:24.465293+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 21012480 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:25.465510+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa5000/0x0/0x1bfc00000, data 0x25e2d6f/0x26c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 21012480 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:26.465683+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa5000/0x0/0x1bfc00000, data 0x25e2d6f/0x26c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356606 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f6d1000 session 0x55889f44fe00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 20996096 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f6d1400 session 0x55889f7014a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:27.465954+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 20996096 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:28.466104+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 20996096 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:29.466270+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 20996096 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:30.466428+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.918316841s of 10.012021065s, submitted: 33
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 20971520 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:31.467021+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361217 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107487232 unmapped: 20955136 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:32.467189+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107487232 unmapped: 20955136 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:33.467380+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 20946944 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:34.467511+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 20946944 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:35.467667+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 20946944 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:36.467848+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361217 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 20946944 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:37.468057+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 20946944 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:38.468237+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 20946944 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:39.468378+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 20946944 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:40.468605+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 20946944 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:41.468810+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361217 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107503616 unmapped: 20938752 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:42.468976+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107503616 unmapped: 20938752 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:43.469222+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107503616 unmapped: 20938752 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:44.469745+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107503616 unmapped: 20938752 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:45.469909+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107503616 unmapped: 20938752 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:46.470168+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361217 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107503616 unmapped: 20938752 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:47.470433+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107503616 unmapped: 20938752 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:48.470615+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107520000 unmapped: 20922368 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:49.470867+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107520000 unmapped: 20922368 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:50.471009+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107520000 unmapped: 20922368 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:51.471179+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361217 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107520000 unmapped: 20922368 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:52.471334+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107520000 unmapped: 20922368 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:53.471499+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:54.471615+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:55.471762+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:56.471928+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361217 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:57.472080+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:58.472253+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T23:59:59.472525+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:00.472706+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:01.472903+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361217 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:02.473036+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:03.473244+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:04.473401+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:05.473652+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:06.473860+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361217 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:07.474078+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:08.474291+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:09.474498+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:10.474683+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:11.474868+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361217 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:12.475093+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:13.475406+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:14.475644+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:15.475856+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:16.476055+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361217 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:17.476243+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:18.476837+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:19.477015+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:20.477195+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:21.477372+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361217 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:22.477616+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:23.477816+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 20914176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:24.477994+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 20905984 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:25.478161+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 20905984 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:26.478304+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361217 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 20905984 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:27.478546+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 20905984 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:28.478796+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 20905984 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:29.479035+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 20905984 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:30.479251+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 20905984 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:31.479483+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361217 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 20897792 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:32.479685+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 20897792 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:33.479940+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 20889600 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:34.480126+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 20889600 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:35.480297+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 20889600 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:36.480470+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361217 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 20889600 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:37.480681+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:38.480835+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 20889600 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:39.481066+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 20889600 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:40.481252+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 20889600 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa6000/0x0/0x1bfc00000, data 0x25e2d5f/0x26c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:41.481410+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 20889600 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361217 data_alloc: 234881024 data_used: 19537920
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:42.481583+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 20889600 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:43.481780+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 20889600 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f71ac00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 73.214118958s of 73.438522339s, submitted: 76
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 ms_handle_reset con 0x55889f71ac00 session 0x55889f3554a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:44.481938+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 20889600 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:45.482134+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 20889600 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b8fa5000/0x0/0x1bfc00000, data 0x25e2d6f/0x26c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _renew_subs
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 166 ms_handle_reset con 0x55889d647400 session 0x55889f6a50e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:46.482288+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107569152 unmapped: 20873216 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 166 ms_handle_reset con 0x55889e3c0c00 session 0x55889f705a40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1368043 data_alloc: 234881024 data_used: 19550208
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:47.482454+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 20897792 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 166 handle_osd_map epochs [166,167], i have 166, src has [1,167]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 167 ms_handle_reset con 0x55889f6d1000 session 0x55889f6d9a40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:48.482656+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 20856832 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 167 ms_handle_reset con 0x55889f6d1400 session 0x55889f4341e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f71dc00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 167 heartbeat osd_stat(store_statfs(0x1b8f9e000/0x0/0x1bfc00000, data 0x25e6675/0x26cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 167 ms_handle_reset con 0x55889f71dc00 session 0x55889f6d81e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:49.482928+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 20848640 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:50.483256+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 20848640 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 167 heartbeat osd_stat(store_statfs(0x1b8f9f000/0x0/0x1bfc00000, data 0x25e6665/0x26ce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:51.483487+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 20848640 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 167 heartbeat osd_stat(store_statfs(0x1b8f9f000/0x0/0x1bfc00000, data 0x25e6665/0x26ce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370522 data_alloc: 234881024 data_used: 19554304
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 167 ms_handle_reset con 0x55889d647400 session 0x55889f440f00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:52.483653+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 20979712 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 167 ms_handle_reset con 0x55889e3c0c00 session 0x55889f7045a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:53.483871+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 20905984 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.138070107s of 10.326039314s, submitted: 69
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889f6d1000 session 0x55889f704f00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:54.484038+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889f6d1400 session 0x55889f354960
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107520000 unmapped: 20922368 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:55.484188+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107520000 unmapped: 20922368 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:56.484404+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107520000 unmapped: 20922368 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f708800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889f708800 session 0x55889f44e780
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 heartbeat osd_stat(store_statfs(0x1b8f9c000/0x0/0x1bfc00000, data 0x25e81a4/0x26d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379907 data_alloc: 234881024 data_used: 19554304
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:57.484545+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 107520000 unmapped: 20922368 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889d647400 session 0x55889e609c20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889e3c0c00 session 0x55889f6a5860
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:58.484732+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889f6d1000 session 0x55889f7005a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889f6d1400 session 0x55889e9bba40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 20267008 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:00:59.484884+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 20267008 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f708c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889f708c00 session 0x55889e930000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:00.485033+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 20234240 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:01.485210+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 20234240 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1444397 data_alloc: 234881024 data_used: 19554304
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:02.485363+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 20234240 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 heartbeat osd_stat(store_statfs(0x1b8407000/0x0/0x1bfc00000, data 0x2d6a286/0x2e57000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:03.485655+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 20234240 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 heartbeat osd_stat(store_statfs(0x1b8407000/0x0/0x1bfc00000, data 0x2d6a286/0x2e57000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:04.485821+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 20234240 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.597397804s of 10.915399551s, submitted: 119
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889d647400 session 0x55889f7010e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:05.485995+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889e3c0c00 session 0x55889e51ed20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 20193280 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:06.486206+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 20193280 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889f6d1000 session 0x55889e51e780
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889f6d1400 session 0x55889f3ca000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1440646 data_alloc: 234881024 data_used: 19554304
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:07.486386+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 20185088 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 heartbeat osd_stat(store_statfs(0x1b840b000/0x0/0x1bfc00000, data 0x2d6a1a4/0x2e53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889c4b0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889c4b0c00 session 0x55889c526b40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889d647400 session 0x55889bcc25a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:08.486552+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 20152320 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 heartbeat osd_stat(store_statfs(0x1b840b000/0x0/0x1bfc00000, data 0x2d6a1a4/0x2e53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 heartbeat osd_stat(store_statfs(0x1b840b000/0x0/0x1bfc00000, data 0x2d6a1a4/0x2e53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:09.486810+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 20152320 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:10.487021+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 20152320 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:11.487229+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 20152320 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1440077 data_alloc: 234881024 data_used: 19554304
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:12.487447+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 20152320 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 heartbeat osd_stat(store_statfs(0x1b840b000/0x0/0x1bfc00000, data 0x2d6a1a4/0x2e53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:13.487673+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108298240 unmapped: 20144128 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:14.487858+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108298240 unmapped: 20144128 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:15.488050+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 20135936 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:16.488262+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 20135936 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1440077 data_alloc: 234881024 data_used: 19554304
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 heartbeat osd_stat(store_statfs(0x1b840b000/0x0/0x1bfc00000, data 0x2d6a1a4/0x2e53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:17.488486+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 20135936 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:18.488682+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 20135936 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:19.488884+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 20135936 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:20.489102+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 20135936 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:21.489330+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 20127744 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 heartbeat osd_stat(store_statfs(0x1b840b000/0x0/0x1bfc00000, data 0x2d6a1a4/0x2e53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889e3c0c00 session 0x55889e319e00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437837 data_alloc: 234881024 data_used: 19554304
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:22.489540+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889f6d1000 session 0x55889f048f00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 20185088 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:23.489828+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 20185088 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:24.490012+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.027177811s of 19.245706558s, submitted: 63
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889f6d1400 session 0x55889f3aef00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d029c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 20160512 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889d029c00 session 0x55889f3c2f00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:25.490185+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 20160512 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:26.490387+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 20152320 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889d647400 session 0x55889f7041e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437585 data_alloc: 234881024 data_used: 19554304
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 heartbeat osd_stat(store_statfs(0x1b840a000/0x0/0x1bfc00000, data 0x2d6a1b4/0x2e54000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [0,1,0,1])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:27.490649+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889e3c0c00 session 0x55889f6a4d20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108445696 unmapped: 19996672 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889f6d1000 session 0x55889f434000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889f6d1400 session 0x55889f4185a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:28.490811+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x5588a0de6000 session 0x55889f3c43c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 19169280 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:29.491025+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 19169280 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:30.491188+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 19169280 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:31.491408+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 19169280 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 heartbeat osd_stat(store_statfs(0x1b8019000/0x0/0x1bfc00000, data 0x315b1b4/0x3245000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481726 data_alloc: 234881024 data_used: 19554304
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:32.491677+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 19161088 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:33.491943+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 19161088 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:34.492145+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 19152896 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:35.492314+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 19152896 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 heartbeat osd_stat(store_statfs(0x1b8019000/0x0/0x1bfc00000, data 0x315b1b4/0x3245000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:36.492547+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 19152896 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.401134491s of 12.696738243s, submitted: 125
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 heartbeat osd_stat(store_statfs(0x1b8019000/0x0/0x1bfc00000, data 0x315b1b4/0x3245000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481726 data_alloc: 234881024 data_used: 19554304
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889d647400 session 0x55889e316b40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:37.492795+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 19742720 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:38.492927+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108912640 unmapped: 19529728 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 ms_handle_reset con 0x55889e3c0c00 session 0x55889f424b40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:39.493074+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 19480576 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:40.493269+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 19480576 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 heartbeat osd_stat(store_statfs(0x1b840b000/0x0/0x1bfc00000, data 0x2d6a1a4/0x2e53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:41.493430+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 19480576 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889f6d1000 session 0x55889f049e00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1447432 data_alloc: 234881024 data_used: 19562496
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:42.493592+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 19456000 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x5588a0de6400 session 0x55889f432d20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:43.493802+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x5588a0de6800 session 0x55889f419e00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 19365888 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x5588a0de6800 session 0x55889ecdfa40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b8b89000/0x0/0x1bfc00000, data 0x25e9e5f/0x26d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [0,0,0,0,0,0,3])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889d647400 session 0x55889f3d6f00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:44.493977+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889f6d1400 session 0x55889f425860
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889e3c0c00 session 0x55889ecde960
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 19193856 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889f6d1000 session 0x55889f434960
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:45.494157+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 19193856 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b8b85000/0x0/0x1bfc00000, data 0x25e9f43/0x26d9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:46.494319+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 19193856 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1404312 data_alloc: 234881024 data_used: 19562496
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:47.494438+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 19193856 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b8b85000/0x0/0x1bfc00000, data 0x25e9f43/0x26d9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:48.494654+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 19193856 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:49.494988+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 19185664 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b8b85000/0x0/0x1bfc00000, data 0x25e9f43/0x26d9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:50.495156+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 19185664 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:51.495329+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b8b85000/0x0/0x1bfc00000, data 0x25e9f43/0x26d9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 19185664 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.205874443s of 15.069627762s, submitted: 223
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1404312 data_alloc: 234881024 data_used: 19562496
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:52.495520+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 19185664 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889d647400 session 0x55889e71d680
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:53.495758+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 19193856 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:54.495947+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 19193856 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889e3c0c00 session 0x55889f3c2d20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:55.496158+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 19193856 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b8b84000/0x0/0x1bfc00000, data 0x25e9fa5/0x26da000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:56.496339+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 19193856 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x5588a0de6800 session 0x55889f3c5e00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405038 data_alloc: 234881024 data_used: 19562496
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:57.496462+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109264896 unmapped: 19177472 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889f6d1400 session 0x55889f701860
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b8b84000/0x0/0x1bfc00000, data 0x25e9fa5/0x26da000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [0,0,0,0,0,1,0,0,0,1])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:58.496588+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 19161088 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:01:59.496742+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 12533760 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x5588a0de6c00 session 0x55889c9983c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b855e000/0x0/0x1bfc00000, data 0x2c0ffa5/0x2d00000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x5588a0de6400 session 0x55889f355e00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:00.497704+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889d647400 session 0x55889f4243c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109600768 unmapped: 18841600 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889e3c0c00 session 0x55889f441680
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:01.497838+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889f6d1400 session 0x55889f3c3c20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 18227200 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de7000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x5588a0de7000 session 0x55889f3c4b40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x5588a0de6800 session 0x55889e31b2c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553604 data_alloc: 234881024 data_used: 19562496
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:02.498003+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de7000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.850624561s of 10.446049690s, submitted: 93
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 18030592 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x5588a0de7000 session 0x55889f435c20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:03.498203+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110419968 unmapped: 18022400 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:04.498349+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 18014208 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:05.498605+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b7986000/0x0/0x1bfc00000, data 0x37e7fa5/0x38d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 18014208 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b7986000/0x0/0x1bfc00000, data 0x37e7fa5/0x38d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:06.498817+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 18014208 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:07.498967+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553620 data_alloc: 234881024 data_used: 19562496
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 18006016 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:08.499148+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 18006016 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:09.499342+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 18006016 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:10.499474+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 18006016 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:11.499637+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 18006016 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b7986000/0x0/0x1bfc00000, data 0x37e7fa5/0x38d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:12.499837+0000)
Jan 22 00:20:24 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18231 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553620 data_alloc: 234881024 data_used: 19562496
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 17997824 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:13.500092+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 17997824 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:14.500254+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 17997824 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:15.500445+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b7986000/0x0/0x1bfc00000, data 0x37e7fa5/0x38d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 17997824 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:16.500647+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 17997824 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:17.500891+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553620 data_alloc: 234881024 data_used: 19562496
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.707829475s of 14.846472740s, submitted: 3
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 17997824 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:18.501126+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110485504 unmapped: 17956864 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b7986000/0x0/0x1bfc00000, data 0x37e7fa5/0x38d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,2])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:19.501327+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 17940480 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889d647400 session 0x55889e9983c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:20.501496+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 17932288 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889e3c0c00 session 0x55889f6d8780
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:21.501687+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110518272 unmapped: 17924096 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:22.501864+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465687 data_alloc: 234881024 data_used: 19562496
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 17891328 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:23.502142+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889f6d1400 session 0x55889d5c85a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 17891328 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b855f000/0x0/0x1bfc00000, data 0x2c0ff43/0x2cff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:24.502363+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889d647400 session 0x55889f4343c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 17874944 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b855f000/0x0/0x1bfc00000, data 0x2c0ff43/0x2cff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:25.502611+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 17874944 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:26.502812+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110575616 unmapped: 17866752 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:27.502985+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1419857 data_alloc: 234881024 data_used: 19562496
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.438516617s of 10.133496284s, submitted: 71
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110575616 unmapped: 17866752 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889e3c0c00 session 0x55889d0834a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889f6d1400 session 0x55889f374000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:28.503118+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110575616 unmapped: 17866752 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:29.503272+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 17858560 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:30.503468+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b8b02000/0x0/0x1bfc00000, data 0x25e9fa6/0x26da000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 17858560 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:31.503654+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b8b84000/0x0/0x1bfc00000, data 0x25e9fa6/0x26da000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 17858560 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x5588a0de6800 session 0x55889f7003c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:32.503799+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1419825 data_alloc: 234881024 data_used: 19566592
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 17858560 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de7000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:33.503976+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x5588a0de7000 session 0x55889f6a4780
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110583808 unmapped: 17858560 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:34.504377+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 17842176 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889d647400 session 0x55889c92f4a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:35.504609+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 17825792 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889e3c0c00 session 0x55889f3c21e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:36.504776+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b8b86000/0x0/0x1bfc00000, data 0x25e9f34/0x26d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110641152 unmapped: 17801216 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b8b86000/0x0/0x1bfc00000, data 0x25e9f34/0x26d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:37.504933+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420293 data_alloc: 234881024 data_used: 19566592
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x5588a0de6800 session 0x55889f44ef00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889f6d1400 session 0x55889d5c94a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 17776640 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:38.505113+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de7000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.178460121s of 10.884570122s, submitted: 36
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x5588a0de7000 session 0x55889f6d85a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 17776640 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889e3c0c00 session 0x55889e51f2c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b8b86000/0x0/0x1bfc00000, data 0x25e9f34/0x26d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [0,0,0,0,0,1])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:39.505261+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b8b86000/0x0/0x1bfc00000, data 0x25e9f34/0x26d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [0,0,0,0,0,0,0,1,0,1])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110690304 unmapped: 17752064 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:40.505455+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110690304 unmapped: 17752064 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:41.505645+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 17735680 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:42.505779+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889d647400 session 0x55889eb3d860
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420358 data_alloc: 234881024 data_used: 19570688
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 17711104 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:43.505949+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x55889f6d1400 session 0x55889e9a2f00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 17711104 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b8b89000/0x0/0x1bfc00000, data 0x25e9e60/0x26d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x5588a0de6800 session 0x55889f048d20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:44.506075+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110739456 unmapped: 17702912 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:45.506202+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 ms_handle_reset con 0x5588a0de6400 session 0x55889d2d85a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110739456 unmapped: 17702912 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:46.506368+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110739456 unmapped: 17702912 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:47.506600+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418599 data_alloc: 234881024 data_used: 19566592
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110747648 unmapped: 17694720 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b8b89000/0x0/0x1bfc00000, data 0x25e9e60/0x26d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:48.506731+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b8b89000/0x0/0x1bfc00000, data 0x25e9e60/0x26d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110747648 unmapped: 17694720 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.802462101s of 10.618214607s, submitted: 67
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _renew_subs
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 169 handle_osd_map epochs [170,170], i have 170, src has [1,170]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:49.506891+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 17637376 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:50.507082+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 170 ms_handle_reset con 0x55889d647400 session 0x55889f3c3e00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 170 heartbeat osd_stat(store_statfs(0x1b8b85000/0x0/0x1bfc00000, data 0x25ebb0d/0x26d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 17620992 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:51.507237+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 17620992 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:52.507393+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1422773 data_alloc: 234881024 data_used: 19574784
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 17620992 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:53.507571+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 17620992 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:54.507826+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 17620992 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:55.508009+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 17620992 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 170 heartbeat osd_stat(store_statfs(0x1b8b85000/0x0/0x1bfc00000, data 0x25ebb0d/0x26d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:56.508196+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 17620992 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:57.508344+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1422773 data_alloc: 234881024 data_used: 19574784
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 17620992 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:58.508479+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 17620992 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _renew_subs
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.926611900s of 10.066731453s, submitted: 16
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:02:59.508600+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 171 heartbeat osd_stat(store_statfs(0x1b8b81000/0x0/0x1bfc00000, data 0x25ed64c/0x26db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 17596416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:00.508751+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 17596416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:01.508900+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 171 heartbeat osd_stat(store_statfs(0x1b8b82000/0x0/0x1bfc00000, data 0x25ed64c/0x26db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 17596416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:02.509069+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425747 data_alloc: 234881024 data_used: 19574784
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 17596416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:03.509301+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 17596416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:04.509489+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 17596416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:05.509658+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 17596416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 171 heartbeat osd_stat(store_statfs(0x1b8b82000/0x0/0x1bfc00000, data 0x25ed64c/0x26db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:06.509801+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 171 heartbeat osd_stat(store_statfs(0x1b8b82000/0x0/0x1bfc00000, data 0x25ed64c/0x26db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 17596416 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:07.509949+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425747 data_alloc: 234881024 data_used: 19574784
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 171 heartbeat osd_stat(store_statfs(0x1b8b82000/0x0/0x1bfc00000, data 0x25ed64c/0x26db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 17588224 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:08.510179+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 17588224 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:09.510347+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 171 heartbeat osd_stat(store_statfs(0x1b8b82000/0x0/0x1bfc00000, data 0x25ed64c/0x26db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 17588224 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:10.510525+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 17588224 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:11.510657+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 17588224 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:12.510833+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425747 data_alloc: 234881024 data_used: 19574784
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 171 heartbeat osd_stat(store_statfs(0x1b8b82000/0x0/0x1bfc00000, data 0x25ed64c/0x26db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 17588224 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:13.511096+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 17588224 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:14.511319+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 171 heartbeat osd_stat(store_statfs(0x1b8b82000/0x0/0x1bfc00000, data 0x25ed64c/0x26db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 17588224 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 171 heartbeat osd_stat(store_statfs(0x1b8b82000/0x0/0x1bfc00000, data 0x25ed64c/0x26db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:15.511514+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 17588224 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:16.511685+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 17571840 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:17.511858+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425747 data_alloc: 234881024 data_used: 19574784
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 17571840 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:18.512037+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 17571840 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:19.512207+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 171 heartbeat osd_stat(store_statfs(0x1b8b82000/0x0/0x1bfc00000, data 0x25ed64c/0x26db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 17571840 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:20.512398+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 17563648 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:21.512691+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 17563648 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:22.512913+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425747 data_alloc: 234881024 data_used: 19574784
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 17563648 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:23.513114+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 17563648 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:24.513267+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 17555456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:25.513445+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 171 heartbeat osd_stat(store_statfs(0x1b8b82000/0x0/0x1bfc00000, data 0x25ed64c/0x26db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 17555456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:26.513632+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 17555456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:27.513817+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425747 data_alloc: 234881024 data_used: 19574784
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 171 heartbeat osd_stat(store_statfs(0x1b8b82000/0x0/0x1bfc00000, data 0x25ed64c/0x26db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 17555456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:28.513952+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 17555456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:29.514111+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 17555456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:30.514242+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 171 heartbeat osd_stat(store_statfs(0x1b8b82000/0x0/0x1bfc00000, data 0x25ed64c/0x26db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 17555456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:31.514447+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 17555456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:32.514616+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425747 data_alloc: 234881024 data_used: 19574784
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 17555456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:33.514827+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 17555456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:34.514996+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 17555456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:35.515158+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 171 heartbeat osd_stat(store_statfs(0x1b8b82000/0x0/0x1bfc00000, data 0x25ed64c/0x26db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 17555456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:36.515376+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 171 heartbeat osd_stat(store_statfs(0x1b8b82000/0x0/0x1bfc00000, data 0x25ed64c/0x26db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 17555456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:37.515663+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425747 data_alloc: 234881024 data_used: 19574784
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 17555456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:38.515825+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 17555456 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:39.516012+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.878131866s of 40.351966858s, submitted: 14
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 17539072 heap: 128442368 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 171 heartbeat osd_stat(store_statfs(0x1b8b82000/0x0/0x1bfc00000, data 0x25ed64c/0x26db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:40.516155+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _renew_subs
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 172 ms_handle_reset con 0x55889e3c0c00 session 0x55889c83ef00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 25927680 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:41.516293+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 25927680 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:42.516452+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1485386 data_alloc: 234881024 data_used: 19582976
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 172 heartbeat osd_stat(store_statfs(0x1b837f000/0x0/0x1bfc00000, data 0x2def2a5/0x2ede000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 25919488 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:43.516697+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 25919488 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 172 heartbeat osd_stat(store_statfs(0x1b837f000/0x0/0x1bfc00000, data 0x2def2a5/0x2ede000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:44.516883+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 172 heartbeat osd_stat(store_statfs(0x1b837f000/0x0/0x1bfc00000, data 0x2def2a5/0x2ede000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 25919488 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-mon[74318]: from='client.27916 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:24 compute-0 ceph-mon[74318]: from='client.18162 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:24 compute-0 ceph-mon[74318]: from='client.27989 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:24 compute-0 ceph-mon[74318]: from='client.18186 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:24 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3899751111' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 00:20:24 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/76491048' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 22 00:20:24 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2913849269' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 22 00:20:24 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1341153239' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 00:20:24 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/337379458' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 00:20:24 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3832134333' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 22 00:20:24 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/956971882' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 22 00:20:24 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3222352367' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:45.517154+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 25919488 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:46.517314+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _renew_subs
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 173 heartbeat osd_stat(store_statfs(0x1b837f000/0x0/0x1bfc00000, data 0x2def2a5/0x2ede000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [0,0,0,1])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 25878528 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 173 ms_handle_reset con 0x55889f6d1400 session 0x55889e9a2b40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:47.517437+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438702 data_alloc: 234881024 data_used: 19591168
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 173 heartbeat osd_stat(store_statfs(0x1b8b7d000/0x0/0x1bfc00000, data 0x25f0f52/0x26e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110108672 unmapped: 26730496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:48.517622+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 173 handle_osd_map epochs [173,174], i have 173, src has [1,174]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 174 ms_handle_reset con 0x5588a0de6800 session 0x55889f704000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110141440 unmapped: 26697728 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:49.517850+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110141440 unmapped: 26697728 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:50.518051+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110141440 unmapped: 26697728 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:51.518230+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110149632 unmapped: 26689536 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:52.518348+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1499738 data_alloc: 234881024 data_used: 19599360
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 110149632 unmapped: 26689536 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 174 heartbeat osd_stat(store_statfs(0x1b8376000/0x0/0x1bfc00000, data 0x2df2c1d/0x2ee7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:53.518531+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 174 heartbeat osd_stat(store_statfs(0x1b8376000/0x0/0x1bfc00000, data 0x2df2c1d/0x2ee7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 174 handle_osd_map epochs [175,175], i have 175, src has [1,175]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.990249634s of 14.311942101s, submitted: 76
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 26853376 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:54.518696+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de7400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 26869760 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:55.518882+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 175 handle_osd_map epochs [175,176], i have 175, src has [1,176]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 176 ms_handle_reset con 0x5588a0de7400 session 0x55889f4345a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111042560 unmapped: 25796608 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:56.519108+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 25763840 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:57.519280+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1449555 data_alloc: 234881024 data_used: 19599360
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 25763840 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:58.519451+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 176 heartbeat osd_stat(store_statfs(0x1b8371000/0x0/0x1bfc00000, data 0x25f63b3/0x26ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 25763840 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:03:59.519632+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 176 heartbeat osd_stat(store_statfs(0x1b8371000/0x0/0x1bfc00000, data 0x25f63b3/0x26ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 25763840 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:00.519793+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 25763840 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:01.519964+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 25763840 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:02.520162+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1449555 data_alloc: 234881024 data_used: 19599360
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 25763840 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:03.520428+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 176 handle_osd_map epochs [177,177], i have 176, src has [1,177]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 25706496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b70000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:04.520686+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 25706496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:05.520887+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 25706496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:06.521082+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 25706496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:07.521244+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451665 data_alloc: 234881024 data_used: 19599360
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 25706496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:08.521400+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b70000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 25706496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:09.521671+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b70000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 25706496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:10.521848+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 25706496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:11.522139+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 25706496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:12.522315+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451665 data_alloc: 234881024 data_used: 19599360
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 25706496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:13.522506+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 25706496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:14.522718+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b70000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 25706496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:15.522858+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 25706496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:16.522977+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 25706496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:17.523117+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451665 data_alloc: 234881024 data_used: 19599360
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 25706496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:18.523255+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 25706496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:19.523405+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 25698304 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:20.523609+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b70000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 25690112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:21.523750+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 25690112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:22.523896+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451665 data_alloc: 234881024 data_used: 19599360
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 25690112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:23.524090+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b70000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 25690112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:24.524259+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 25690112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:25.524409+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 25690112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:26.524627+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 25690112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:27.524853+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451665 data_alloc: 234881024 data_used: 19599360
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 25690112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:28.525039+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 25690112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:29.525429+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b70000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 25681920 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:30.525611+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 25673728 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:31.525759+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b70000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 25673728 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:32.525912+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451665 data_alloc: 234881024 data_used: 19599360
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 25673728 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:33.526137+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b70000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 25673728 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:34.526322+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 25673728 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:35.526505+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 25673728 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:36.526701+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b70000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 25673728 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:37.526861+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451665 data_alloc: 234881024 data_used: 19599360
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 25673728 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:38.527106+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 11K writes, 37K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 11K writes, 3448 syncs, 3.34 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3122 writes, 7194 keys, 3122 commit groups, 1.0 writes per commit group, ingest: 2.81 MB, 0.00 MB/s
                                           Interval WAL: 3122 writes, 1392 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 25673728 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:39.527225+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b70000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 25673728 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:40.527364+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 ms_handle_reset con 0x55889d647400 session 0x55889f700960
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 ms_handle_reset con 0x55889e3c0c00 session 0x55889e71d0e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 25395200 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:41.527487+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 25395200 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 48.200469971s of 48.557357788s, submitted: 85
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:42.527919+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 ms_handle_reset con 0x55889f6d1400 session 0x55889f440000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b70000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456579 data_alloc: 234881024 data_used: 19599360
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111419392 unmapped: 25419776 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:43.528414+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111419392 unmapped: 25419776 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:44.528586+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 ms_handle_reset con 0x5588a0de6800 session 0x55889e71cd20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de7800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 ms_handle_reset con 0x5588a0de7800 session 0x55889e930960
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 111419392 unmapped: 25419776 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:45.528757+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 ms_handle_reset con 0x55889d647400 session 0x55889e71d860
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b71000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 24682496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:46.528902+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 24682496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:47.529049+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1544816 data_alloc: 234881024 data_used: 19599360
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 24682496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:48.529186+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 24674304 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:49.529363+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 24051712 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:50.529628+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b7ff0000/0x0/0x1bfc00000, data 0x313aef2/0x3230000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 ms_handle_reset con 0x55889e3c0c00 session 0x55889f3ca5a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:51.529799+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:52.530049+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464186 data_alloc: 234881024 data_used: 19656704
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:53.530301+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:54.530480+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:55.530718+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b71000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:56.530910+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:57.531053+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464186 data_alloc: 234881024 data_used: 19656704
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:58.531247+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:04:59.531532+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b71000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:00.531760+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b71000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:01.531925+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:02.532126+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464186 data_alloc: 234881024 data_used: 19656704
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:03.532331+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b71000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:04.532647+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:05.532837+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:06.533002+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:07.533174+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464186 data_alloc: 234881024 data_used: 19656704
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:08.533357+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b71000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:09.533656+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:10.533960+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:11.534107+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 24256512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:12.534265+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 24248320 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464186 data_alloc: 234881024 data_used: 19656704
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:13.534492+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 24248320 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b71000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:14.534669+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 24240128 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:15.534866+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 24240128 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:16.535236+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 24240128 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:17.535409+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 24240128 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464186 data_alloc: 234881024 data_used: 19656704
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:18.535648+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 24240128 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b71000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:19.535906+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 24240128 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:20.536091+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 24240128 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b71000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:21.536260+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 24240128 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:22.536471+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 24240128 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464186 data_alloc: 234881024 data_used: 19656704
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 ms_handle_reset con 0x55889f6d1400 session 0x55889ecde5a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:23.536685+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 23977984 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 ms_handle_reset con 0x5588a0de6800 session 0x55889f3af2c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:24.536870+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 23904256 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b71000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de7c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.402942657s of 42.624431610s, submitted: 65
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 ms_handle_reset con 0x5588a0de7c00 session 0x55889f375e00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:25.537048+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 23896064 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:26.537250+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 23896064 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 ms_handle_reset con 0x55889d647400 session 0x55889eb3c960
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:27.537407+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889e3c0c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 23896064 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 ms_handle_reset con 0x55889e3c0c00 session 0x55889f355c20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8b71000/0x0/0x1bfc00000, data 0x25f7ef2/0x26ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 ms_handle_reset con 0x55889f6d1400 session 0x55889e9ba960
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530491 data_alloc: 234881024 data_used: 19656704
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 ms_handle_reset con 0x5588a0de6800 session 0x55889f435a40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:28.537615+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 23773184 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8308000/0x0/0x1bfc00000, data 0x2e60ef2/0x2f56000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:29.537741+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 23773184 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:30.537921+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 23773184 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 heartbeat osd_stat(store_statfs(0x1b8308000/0x0/0x1bfc00000, data 0x2e60ef2/0x2f56000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f708800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:31.538107+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f708400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 23773184 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _renew_subs
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 177 handle_osd_map epochs [178,178], i have 177, src has [1,178]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 178 heartbeat osd_stat(store_statfs(0x1b8308000/0x0/0x1bfc00000, data 0x2e60ef2/0x2f56000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:32.538286+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 23928832 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _renew_subs
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 179 ms_handle_reset con 0x55889f708400 session 0x55889be96d20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538839 data_alloc: 234881024 data_used: 19668992
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:33.538480+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 112918528 unmapped: 23920640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _renew_subs
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f708000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f709c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:34.538665+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 23379968 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 180 ms_handle_reset con 0x55889f708000 session 0x55889f4330e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 180 ms_handle_reset con 0x55889f709c00 session 0x55889c30e780
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _renew_subs
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 180 handle_osd_map epochs [181,181], i have 180, src has [1,181]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.680815697s of 10.128460884s, submitted: 80
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 181 ms_handle_reset con 0x55889f708800 session 0x55889f425a40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:35.538871+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 23265280 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:36.539035+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 23232512 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f708c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 181 ms_handle_reset con 0x55889f708c00 session 0x55889f4252c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 181 heartbeat osd_stat(store_statfs(0x1b7657000/0x0/0x1bfc00000, data 0x3b090f5/0x3c04000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:37.539195+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 113623040 unmapped: 23216128 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1644436 data_alloc: 234881024 data_used: 19668992
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 181 ms_handle_reset con 0x55889d647400 session 0x55889c681e00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:38.539348+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 113639424 unmapped: 23199744 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f708000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 182 ms_handle_reset con 0x55889f708000 session 0x55889ecde3c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:39.539542+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 22749184 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _renew_subs
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 183 ms_handle_reset con 0x55889f6d0400 session 0x55889e539680
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 183 ms_handle_reset con 0x55889f6d0800 session 0x55889f049860
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 183 ms_handle_reset con 0x55889d647400 session 0x55889f34f2c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:40.539750+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114081792 unmapped: 22757376 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b6d58000/0x0/0x1bfc00000, data 0x44068ef/0x4505000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:41.539975+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114081792 unmapped: 22757376 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:42.540188+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114081792 unmapped: 22757376 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1723332 data_alloc: 234881024 data_used: 19681280
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:43.540495+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114081792 unmapped: 22757376 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:44.540694+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 22749184 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:45.540971+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 22749184 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:46.541224+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b6d58000/0x0/0x1bfc00000, data 0x44068ef/0x4505000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 22740992 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:47.541473+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 22740992 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1723332 data_alloc: 234881024 data_used: 19681280
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:48.541693+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 22740992 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:49.541932+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 22740992 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:50.542187+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 22740992 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.852025986s of 15.971266747s, submitted: 46
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de7000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:51.542354+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 22732800 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 183 handle_osd_map epochs [183,184], i have 183, src has [1,184]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 184 ms_handle_reset con 0x55889f6d1400 session 0x55889f45d4a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 184 ms_handle_reset con 0x5588a0de7000 session 0x55889d2d8000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 184 heartbeat osd_stat(store_statfs(0x1b6d56000/0x0/0x1bfc00000, data 0x44086e0/0x4507000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 184 ms_handle_reset con 0x5588a0de6000 session 0x55889f354000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:52.542624+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 184 ms_handle_reset con 0x55889f6d0400 session 0x55889c9981e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 184 ms_handle_reset con 0x55889d647400 session 0x55889c30ed20
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503494 data_alloc: 234881024 data_used: 19689472
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:53.542917+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 184 heartbeat osd_stat(store_statfs(0x1b8b5a000/0x0/0x1bfc00000, data 0x26046e0/0x2703000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:54.543105+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:55.543216+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:56.543447+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:57.543690+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503494 data_alloc: 234881024 data_used: 19689472
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:58.543856+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 184 handle_osd_map epochs [184,185], i have 184, src has [1,185]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8b57000/0x0/0x1bfc00000, data 0x2606257/0x2706000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:05:59.544011+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:00.544196+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:01.544397+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8b57000/0x0/0x1bfc00000, data 0x2606257/0x2706000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:02.544658+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1506468 data_alloc: 234881024 data_used: 19689472
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:03.544894+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:04.545121+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:05.545307+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:06.545495+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8b57000/0x0/0x1bfc00000, data 0x2606257/0x2706000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:07.545673+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1506468 data_alloc: 234881024 data_used: 19689472
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:08.545869+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8b57000/0x0/0x1bfc00000, data 0x2606257/0x2706000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:09.546098+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:10.546376+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:11.546646+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8b57000/0x0/0x1bfc00000, data 0x2606257/0x2706000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:12.546920+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8b57000/0x0/0x1bfc00000, data 0x2606257/0x2706000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1506468 data_alloc: 234881024 data_used: 19689472
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:13.547220+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:14.547409+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:15.547651+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 21659648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:16.547856+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115187712 unmapped: 21651456 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:17.548049+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.251956940s of 26.501426697s, submitted: 101
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115187712 unmapped: 21651456 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1505660 data_alloc: 234881024 data_used: 19689472
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:18.548279+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8b58000/0x0/0x1bfc00000, data 0x2606257/0x2706000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 21553152 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8b58000/0x0/0x1bfc00000, data 0x2606257/0x2706000, compress 0x0/0x0/0x0, omap 0x639, meta 0x499f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:19.548524+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 21413888 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:20.548923+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:21.549183+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:22.549422+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1505588 data_alloc: 234881024 data_used: 19689472
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:23.549680+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8748000/0x0/0x1bfc00000, data 0x2606257/0x2706000, compress 0x0/0x0/0x0, omap 0x639, meta 0x4daf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:24.549875+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8748000/0x0/0x1bfc00000, data 0x2606257/0x2706000, compress 0x0/0x0/0x0, omap 0x639, meta 0x4daf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:25.550069+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:26.550276+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:27.550465+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1505588 data_alloc: 234881024 data_used: 19689472
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:28.550649+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:29.550819+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8748000/0x0/0x1bfc00000, data 0x2606257/0x2706000, compress 0x0/0x0/0x0, omap 0x639, meta 0x4daf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:30.551003+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:31.551172+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:32.551400+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8748000/0x0/0x1bfc00000, data 0x2606257/0x2706000, compress 0x0/0x0/0x0, omap 0x639, meta 0x4daf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1505588 data_alloc: 234881024 data_used: 19689472
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:33.551630+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:34.551818+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:35.552023+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:36.552215+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:37.552459+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8748000/0x0/0x1bfc00000, data 0x2606257/0x2706000, compress 0x0/0x0/0x0, omap 0x639, meta 0x4daf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1505588 data_alloc: 234881024 data_used: 19689472
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:38.552643+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:39.552908+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:40.553067+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 21872640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.662223816s of 23.446857452s, submitted: 255
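The _kv_sync_thread utilization lines are the only throughput signal in this stretch: over a 23.4 s window the kv sync thread slept for 22.7 s and committed 255 transactions. Turning that into a busy fraction and a commit rate is a one-liner's worth of arithmetic:

    idle, window, submitted = 22.662223816, 23.446857452, 255
    busy = window - idle
    print(f"busy {busy:.3f}s ({100 * busy / window:.1f}% of the window), "
          f"{submitted / window:.1f} commits/s")

About 3.3% busy at roughly 10.9 commits/s, i.e. background housekeeping rather than client load; the later reports in this section (21, 38, 63, 49, 10 submitted) stay in the same idle regime.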
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8748000/0x0/0x1bfc00000, data 0x2606257/0x2706000, compress 0x0/0x0/0x0, omap 0x639, meta 0x4daf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:41.553242+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 21864448 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:42.553417+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 ms_handle_reset con 0x55889f6d0800 session 0x55889e520960
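The handle_auth_request / ms_handle_reset pairs record short-lived incoming connections: the messenger logs an auth challenge added for a new connection, and when that connection later drops the OSD logs a reset carrying the same con pointer (0x55889f6d0800 above). Grouping events by that pointer is a quick way to see the pairing when reading a longer log; a minimal sketch over two of the lines above:

    import re
    from collections import defaultdict

    lines = [
        "monclient: handle_auth_request added challenge on 0x55889f6d0800",
        "osd.1 185 ms_handle_reset con 0x55889f6d0800 session 0x55889e520960",
    ]
    by_con = defaultdict(list)
    for line in lines:
        m = re.search(r"(handle_auth_request|ms_handle_reset).*?(0x[0-9a-f]+)", line)
        if m:
            by_con[m.group(2)].append(m.group(1))
    print(dict(by_con))
    # {'0x55889f6d0800': ['handle_auth_request', 'ms_handle_reset']}

Note that con pointers are recycled (0x55889f6d0800 reappears several times further down with different session addresses), so any real correlation pass would also need to key on time or on the session address.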
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 21864448 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8746000/0x0/0x1bfc00000, data 0x26062ca/0x2708000, compress 0x0/0x0/0x0, omap 0x639, meta 0x4daf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:43.553654+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1509245 data_alloc: 234881024 data_used: 19689472
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 21864448 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:44.553865+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8746000/0x0/0x1bfc00000, data 0x26062ca/0x2708000, compress 0x0/0x0/0x0, omap 0x639, meta 0x4daf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 21856256 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:45.554029+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 21856256 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:46.554198+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 21856256 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:47.554391+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8746000/0x0/0x1bfc00000, data 0x26062ca/0x2708000, compress 0x0/0x0/0x0, omap 0x639, meta 0x4daf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 21856256 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:48.554665+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1509245 data_alloc: 234881024 data_used: 19689472
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 21856256 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 ms_handle_reset con 0x55889d647400 session 0x55889be974a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8746000/0x0/0x1bfc00000, data 0x26062ca/0x2708000, compress 0x0/0x0/0x0, omap 0x639, meta 0x4daf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:49.554838+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 ms_handle_reset con 0x55889f6d0400 session 0x55889e31af00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 21839872 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:50.555013+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 21839872 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:51.555256+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 21839872 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:52.555456+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 21839872 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8747000/0x0/0x1bfc00000, data 0x2606257/0x2706000, compress 0x0/0x0/0x0, omap 0x639, meta 0x4daf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:53.555742+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1507605 data_alloc: 234881024 data_used: 19689472
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 ms_handle_reset con 0x55889f6d0800 session 0x55889f4401e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 21839872 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 ms_handle_reset con 0x5588a0de6000 session 0x55889f6d83c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:54.555940+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115073024 unmapped: 21766144 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:55.556129+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de7000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.677923203s of 14.754777908s, submitted: 21
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115073024 unmapped: 21766144 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 ms_handle_reset con 0x5588a0de7000 session 0x55889f6d94a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:56.556338+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115073024 unmapped: 21766144 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:57.556673+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b8747000/0x0/0x1bfc00000, data 0x2606257/0x2706000, compress 0x0/0x0/0x0, omap 0x639, meta 0x4daf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115073024 unmapped: 21766144 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:58.556841+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511925 data_alloc: 234881024 data_used: 19750912
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 21757952 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 ms_handle_reset con 0x55889d647400 session 0x55889f3d6000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 ms_handle_reset con 0x55889f6d0400 session 0x55889f3ca780
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:06:59.557011+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 ms_handle_reset con 0x55889f6d0800 session 0x55889d058b40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 21585920 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:00.557147+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 21585920 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:01.557292+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b7cc3000/0x0/0x1bfc00000, data 0x308b257/0x318b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x4daf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 21725184 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 185 handle_osd_map epochs [185,186], i have 185, src has [1,186]
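Amid the housekeeping chatter the OSD is also consuming new OSD maps one epoch at a time: handle_osd_map says a peer delivered epochs 185-186 while this OSD held 185 and the source has the full history [1,186], and the very next heartbeat below is already stamped with epoch 186. The bracketed ranges parse mechanically:

    import re

    line = "osd.1 185 handle_osd_map epochs [185,186], i have 185, src has [1,186]"
    m = re.search(r"epochs \[(\d+),(\d+)\], i have (\d+), src has \[(\d+),(\d+)\]", line)
    first, last, have, src_lo, src_hi = map(int, m.groups())
    print(f"delivered {first}..{last}; OSD at {have} -> applies {last - have} "
          f"epoch(s) and advances to {last}")

The same pattern repeats twice more further down (186 to 187, then 187 to 188), so the cluster map ticked forward three epochs during this burst.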
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 186 ms_handle_reset con 0x5588a0de6000 session 0x55889f4241e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b7cc3000/0x0/0x1bfc00000, data 0x308b257/0x318b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x4daf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:02.557552+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 21708800 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:03.557739+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1600690 data_alloc: 234881024 data_used: 19759104
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 21708800 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:04.557902+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 21708800 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:05.558055+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 21708800 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:06.558214+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 21700608 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b7cbf000/0x0/0x1bfc00000, data 0x308ceb0/0x318e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x4daf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:07.558368+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 21700608 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:08.558522+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1600690 data_alloc: 234881024 data_used: 19759104
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 21700608 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:09.558780+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d1400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.635437965s of 13.870010376s, submitted: 38
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 186 ms_handle_reset con 0x55889f6d1400 session 0x55889e3163c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115146752 unmapped: 21692416 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 186 handle_osd_map epochs [186,187], i have 186, src has [1,187]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 187 ms_handle_reset con 0x55889d647400 session 0x55889ecdf0e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:10.558946+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115146752 unmapped: 21692416 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 187 ms_handle_reset con 0x55889f6d0400 session 0x55889f3c3680
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:11.559087+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 21708800 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 187 ms_handle_reset con 0x55889f6d0800 session 0x55889f3d7e00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 187 heartbeat osd_stat(store_statfs(0x1b7cbc000/0x0/0x1bfc00000, data 0x308eb5d/0x3191000, compress 0x0/0x0/0x0, omap 0x639, meta 0x4daf9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:12.559329+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 187 heartbeat osd_stat(store_statfs(0x1b9762000/0x0/0x1bfc00000, data 0x2609b5d/0x270c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 21708800 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:13.559770+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1525114 data_alloc: 234881024 data_used: 19767296
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 21708800 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:14.560006+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 21708800 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:15.560165+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 187 heartbeat osd_stat(store_statfs(0x1b9762000/0x0/0x1bfc00000, data 0x2609b5d/0x270c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 187 ms_handle_reset con 0x5588a0de6000 session 0x55889f711680
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f708000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 21577728 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 187 ms_handle_reset con 0x55889f708000 session 0x55889c83e780
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:16.560387+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115466240 unmapped: 21372928 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:17.560531+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 187 ms_handle_reset con 0x55889d647400 session 0x55889c30e3c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 21602304 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:18.560860+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 187 handle_osd_map epochs [187,188], i have 187, src has [1,188]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1526602 data_alloc: 234881024 data_used: 19775488
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 21602304 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9762000/0x0/0x1bfc00000, data 0x2609b5d/0x270c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:19.561017+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 21602304 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:20.561210+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.564072609s of 10.947838783s, submitted: 63
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 ms_handle_reset con 0x55889f6d0400 session 0x55889f3d7680
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 21602304 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 ms_handle_reset con 0x55889f6d0800 session 0x55889c9c6000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:21.561367+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 ms_handle_reset con 0x5588a0de6000 session 0x55889f0483c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115646464 unmapped: 21192704 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:22.561609+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115646464 unmapped: 21192704 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:23.561840+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1583931 data_alloc: 234881024 data_used: 19775488
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115646464 unmapped: 21192704 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:24.562044+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9056000/0x0/0x1bfc00000, data 0x2d1469c/0x2e18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115646464 unmapped: 21192704 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6c00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:25.562183+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 ms_handle_reset con 0x5588a0de6c00 session 0x55889f7114a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9056000/0x0/0x1bfc00000, data 0x2d1469c/0x2e18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115490816 unmapped: 21348352 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:26.562428+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115490816 unmapped: 21348352 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:27.562727+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115490816 unmapped: 21348352 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9056000/0x0/0x1bfc00000, data 0x2d14639/0x2e17000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:28.562943+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1582179 data_alloc: 234881024 data_used: 19771392
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115490816 unmapped: 21348352 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:29.563184+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115490816 unmapped: 21348352 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:30.563432+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.815572739s of 10.023037910s, submitted: 49
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 ms_handle_reset con 0x55889d647400 session 0x55889f45d0e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 21331968 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:31.563632+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 21331968 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:32.563834+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 21331968 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:33.564112+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529201 data_alloc: 234881024 data_used: 19771392
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 21331968 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:34.564328+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 21331968 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:35.564520+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 21331968 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:36.564731+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 21331968 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:37.564949+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 21331968 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:38.565156+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529201 data_alloc: 234881024 data_used: 19771392
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 21331968 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 ms_handle_reset con 0x55889f6d0400 session 0x55889f710960
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:39.565301+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 ms_handle_reset con 0x55889f6d0800 session 0x55889f700f00
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 21463040 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:40.565436+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 21454848 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.621417046s of 10.651881218s, submitted: 10
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 ms_handle_reset con 0x5588a0de6000 session 0x55889f704780
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:41.565581+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 21454848 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:42.565736+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 21454848 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:43.565917+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533041 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 21454848 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de7800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:44.566108+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 ms_handle_reset con 0x5588a0de7800 session 0x55889f435680
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889d647400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 21454848 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 ms_handle_reset con 0x55889d647400 session 0x55889d082b40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:45.566299+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 ms_handle_reset con 0x55889f6d0400 session 0x55889e608b40
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115933184 unmapped: 20905984 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:46.566439+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115933184 unmapped: 20905984 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:47.566572+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b8f3e000/0x0/0x1bfc00000, data 0x2e2d639/0x2f30000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115933184 unmapped: 20905984 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:48.566714+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1599632 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115933184 unmapped: 20905984 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:49.566902+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b8f3e000/0x0/0x1bfc00000, data 0x2e2d639/0x2f30000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115933184 unmapped: 20905984 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:50.567078+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115933184 unmapped: 20905984 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:51.567290+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.214493752s of 10.356872559s, submitted: 31
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 ms_handle_reset con 0x55889f6d0800 session 0x55889f3c32c0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x5588a0de6000
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 20889600 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 ms_handle_reset con 0x5588a0de6000 session 0x55889f3ae5a0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:52.567416+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 20889600 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:53.567650+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 20889600 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:54.567859+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 20881408 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:55.568047+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 20881408 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:56.568290+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 20881408 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:57.568599+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 20881408 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:58.568797+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 20881408 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:07:59.568977+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 20881408 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:00.569186+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 20873216 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:01.569402+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 20873216 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:02.569659+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 20873216 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:03.569932+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 20873216 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:04.570067+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 20873216 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:05.570277+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 20873216 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:06.570452+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 20873216 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:07.570662+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 20873216 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:08.570835+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 20873216 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:09.571048+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 20873216 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:10.571235+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 20873216 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:11.571441+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 20873216 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:12.571649+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 20873216 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:13.571849+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 20873216 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:14.572025+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 20873216 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:15.572163+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 20873216 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:16.572352+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:17.572539+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:18.572729+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:19.572883+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:20.573077+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:21.573293+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:22.573470+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:23.573737+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:24.573870+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:25.574107+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:26.574320+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:27.574487+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:28.574806+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:29.575041+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:30.575281+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:31.575519+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:32.575748+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:33.575980+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:34.576181+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:35.576383+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:36.576640+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:37.576800+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:38.577003+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:39.577167+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:40.577343+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:41.577660+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:42.577867+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:43.578136+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:44.578349+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:45.578882+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:46.579090+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:47.579340+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:48.579522+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:49.579654+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:50.579858+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:51.580062+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:52.580234+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:53.580430+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:54.580729+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:55.580928+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:56.581103+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:57.581343+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:58.581514+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:08:59.581756+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:00.581991+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:01.582315+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:02.582502+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:03.582757+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:04.582937+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 20865024 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:05.583064+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 20856832 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:06.583371+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 20856832 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:07.583612+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 20856832 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:08.583895+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 20856832 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:09.584159+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 20856832 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:10.584402+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 20856832 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:11.584608+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 20848640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:12.584771+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 20848640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:13.585084+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 20848640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:14.585333+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 20848640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:15.585653+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 20848640 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:16.585897+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 20840448 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:17.586060+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 20840448 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:18.586253+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 20840448 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:19.586430+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 20840448 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:20.586669+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 20840448 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:21.586850+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 20840448 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:22.587000+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 20840448 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:23.587241+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116006912 unmapped: 20832256 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:24.587424+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116006912 unmapped: 20832256 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:25.587648+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116006912 unmapped: 20832256 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:26.587833+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116006912 unmapped: 20832256 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:27.588627+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116006912 unmapped: 20832256 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:28.588846+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116015104 unmapped: 20824064 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:29.589102+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116015104 unmapped: 20824064 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:30.589298+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116015104 unmapped: 20824064 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:31.589483+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116015104 unmapped: 20824064 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:32.632472+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116015104 unmapped: 20824064 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:33.632728+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116015104 unmapped: 20824064 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:34.633014+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116015104 unmapped: 20824064 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:35.633177+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 20815872 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:36.633381+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 20815872 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:37.633542+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116031488 unmapped: 20807680 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:38.633759+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20799488 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets getting new tickets!
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:39.634066+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _finish_auth 0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:39.635211+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20799488 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:40.634239+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20799488 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:41.634499+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20799488 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:42.634702+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20799488 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:43.634962+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20799488 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:44.635249+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20799488 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:45.635483+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20799488 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:46.635657+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20799488 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:47.635883+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20799488 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:48.636114+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: mgrc ms_handle_reset ms_handle_reset con 0x55889d6c0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/934453051
Jan 22 00:20:24 compute-0 ceph-osd[84656]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/934453051,v1:192.168.122.100:6801/934453051]
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: get_auth_request con 0x5588a0de7800 auth_method 0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: mgrc handle_mgr_configure stats_period=5
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 20725760 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:49.636268+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 ms_handle_reset con 0x55889f3fd800 session 0x55889f34e1e0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889c4f0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 20725760 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:50.636401+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 20725760 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:51.636671+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 20725760 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:52.636841+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 20725760 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:53.637023+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:54.637223+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 20725760 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:55.637411+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 20725760 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:56.637610+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 20725760 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:57.637825+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 20725760 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:58.638016+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 20725760 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:09:59.638142+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 20725760 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:00.638294+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 20725760 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:01.638460+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 20725760 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:02.638609+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 20717568 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:03.638812+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 20717568 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:04.638978+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 20717568 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:05.639116+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 20709376 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:06.639275+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 20709376 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:07.639451+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 20709376 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:08.639629+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 20709376 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:09.639779+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 20709376 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:10.639945+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 20709376 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:11.640120+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 20709376 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:12.640266+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 20709376 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:13.640514+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 20709376 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:14.640710+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 20701184 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:15.640892+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 20701184 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:16.641059+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 20701184 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:17.641189+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 20701184 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:18.641341+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 20701184 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:19.641594+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 20692992 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:20.641806+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 20692992 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:21.642001+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 20692992 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:22.642176+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 20692992 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:23.642383+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 20692992 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:24.642625+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 20692992 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:25.642843+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 20684800 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:26.643038+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 20684800 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:27.643212+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 20684800 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:28.643403+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 20684800 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:29.643600+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 20684800 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:30.643789+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 20684800 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:31.644216+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 20684800 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:32.644386+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 20684800 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:33.644674+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 20676608 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:34.644873+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 20676608 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:35.645102+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 20676608 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:36.645370+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 20676608 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:37.645584+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 20676608 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:38.645730+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 20676608 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:39.645921+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 20676608 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:40.646160+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 20676608 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:41.646393+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 20668416 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:42.646655+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 20668416 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:43.646803+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 20668416 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:44.646955+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 20668416 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:45.647146+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 20668416 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:46.647343+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 20668416 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:47.647510+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 20668416 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:48.647689+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 20668416 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:49.647833+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 20668416 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:50.647982+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 20668416 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:51.648202+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 20668416 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:52.648376+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 20660224 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:53.648689+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 20660224 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:54.648884+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 20660224 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:55.649103+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 20660224 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:56.649311+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 20660224 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:57.649491+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 20660224 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:58.649714+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 20660224 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:10:59.649930+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 20660224 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:00.650212+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 20652032 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:01.650407+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 20652032 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:02.650611+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 20652032 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:03.650822+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 20652032 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:04.651096+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 20643840 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:05.651331+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 20643840 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:06.651509+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 20643840 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:07.651695+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 20643840 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:08.651854+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 20643840 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:09.652031+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 20643840 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:10.652196+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 20643840 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:11.652421+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 20643840 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:12.652593+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 20635648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:13.652840+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 20635648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:14.653123+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 20635648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:15.653264+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 20635648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:16.653444+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 20635648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:17.653695+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 20635648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:18.653954+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 20635648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:19.654163+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 20635648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:20.654382+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 20635648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:21.654627+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 20635648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:22.654893+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 20635648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:23.655174+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 20635648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:24.655417+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 20635648 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:25.655583+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 20627456 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:26.655775+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 20627456 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:27.655959+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 20627456 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:28.656120+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 20619264 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:29.656326+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 20619264 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:30.656615+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 20619264 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:31.656829+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 20619264 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:32.657060+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 20619264 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:33.657284+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 20619264 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:34.657498+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 20619264 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:35.657674+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 20619264 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:36.657875+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 20619264 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:37.658038+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 20619264 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:38.658237+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 20611072 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:39.658439+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 20611072 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:40.658606+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 20611072 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:41.658792+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 20611072 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:42.658984+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 20611072 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:43.659181+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 20611072 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:44.659345+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 20602880 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:45.659677+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 20602880 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:46.659874+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 20602880 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:47.660070+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 20602880 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:48.660285+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 20602880 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:49.660451+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 20602880 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:50.660641+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 20602880 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:51.660807+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 20602880 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:52.661045+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 20594688 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:53.661295+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 20594688 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:54.661511+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 20594688 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:55.661684+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 20594688 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:56.661870+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 20594688 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:57.662069+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 20594688 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:58.662242+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 20594688 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:11:59.662453+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 20594688 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:00.662648+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 20586496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:01.662830+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 20586496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:02.663028+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 20586496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:03.663207+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 20586496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:04.663403+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 20586496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:05.663710+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 20586496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:06.664006+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 20586496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:07.664184+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 20586496 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:08.664316+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 20578304 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:09.664476+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 20578304 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:10.664616+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 20578304 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:11.664821+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 20578304 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:12.665024+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 20578304 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:13.665235+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 20578304 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:14.665424+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 20578304 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:15.665619+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 20578304 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:16.665820+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 20578304 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:17.665994+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 20578304 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:18.666160+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 20578304 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:19.666359+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 20570112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:20.666524+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 20570112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:21.666776+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 20570112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:22.666946+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 20570112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:23.667232+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 20570112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:24.667376+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 20570112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:25.667599+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 20570112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:26.667845+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 20570112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:27.668043+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 20570112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:28.668274+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 20570112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:29.668516+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 20570112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:30.668696+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 20570112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:31.668908+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 20570112 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:32.669088+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 20561920 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:33.669340+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 20561920 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:34.669517+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 20561920 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:35.669687+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 20561920 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:36.669842+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 20561920 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:37.670057+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 20553728 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:38.670238+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 20553728 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:39.670476+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 20553728 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:40.670717+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 20553728 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:41.670853+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 20545536 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:42.671026+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 20545536 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:43.671279+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 20545536 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:44.671482+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 20545536 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:45.671666+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 20545536 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:46.671882+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 20545536 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:47.672085+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 20545536 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:48.672333+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 20545536 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:49.672533+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 20545536 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:50.672758+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 20545536 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:51.672937+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 20545536 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:52.673095+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 20537344 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:53.673390+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 20537344 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:54.673676+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 20537344 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:55.673880+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 20537344 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:56.674059+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 20537344 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:57.674278+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 20537344 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:58.674486+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 20537344 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:12:59.674665+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 20537344 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:00.674844+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 20537344 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:01.675013+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 20537344 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:02.675194+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 20537344 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:03.675409+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 20529152 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:04.675580+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 20520960 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:05.675717+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 20520960 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:06.675892+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 20520960 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:07.676078+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 20520960 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:08.676223+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 20520960 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:09.676351+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 20520960 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:10.676487+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 20520960 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:11.676875+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 20520960 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:12.677097+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 20520960 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:13.677329+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 20520960 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:14.677598+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 20520960 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:15.677768+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 20520960 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:16.677966+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 20520960 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:17.678160+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 20512768 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:18.678329+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 20512768 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:19.678624+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 20512768 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:20.678828+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 20512768 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:21.679018+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 20512768 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:22.679239+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 20512768 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:23.679654+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 20512768 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:24.679840+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 20512768 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:25.680017+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 20512768 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:26.680154+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 20512768 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:27.680317+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 20512768 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:28.680471+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 20512768 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:29.680615+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 20512768 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:30.680741+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 20512768 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:31.680874+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 20504576 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:32.681019+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 20496384 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:33.681221+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 20496384 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:34.681372+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 20496384 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:35.681629+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 20496384 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:36.681830+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 20496384 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:37.681990+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 20496384 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:38.682163+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 20496384 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:39.682384+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 20496384 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:40.682620+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 20496384 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:41.682776+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 20496384 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:42.682917+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 20496384 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:43.683130+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 20488192 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:44.683289+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 20488192 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:45.683515+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 20488192 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:46.683708+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 20488192 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:47.683864+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 20488192 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:48.684063+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 20488192 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:49.684252+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 20488192 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:50.684411+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 20488192 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:51.684575+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 20488192 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:52.684774+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 20480000 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:53.684974+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 20480000 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:54.685147+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 20480000 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:55.685365+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 20480000 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:56.685616+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20471808 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:57.685800+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20471808 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:58.685930+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20471808 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:13:59.686077+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20471808 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:00.686234+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20471808 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:01.686369+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20471808 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:02.686669+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20471808 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:03.686914+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20471808 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:04.687094+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20471808 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:05.687227+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20471808 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:06.687383+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20471808 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:07.687503+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20471808 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:08.687666+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 20463616 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:09.687836+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 20463616 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:10.687961+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 20463616 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:11.688101+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 20463616 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:12.688258+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 20463616 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:13.688495+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 20463616 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:14.688633+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537994 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 20463616 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:15.688772+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 20463616 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:16.688902+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f3fd800
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 384.986602783s of 385.082641602s, submitted: 25
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 ms_handle_reset con 0x55889f3fd800 session 0x55889ecde780
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 20463616 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:17.689079+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b975f000/0x0/0x1bfc00000, data 0x260b69b/0x270f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 20463616 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:18.689308+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b975f000/0x0/0x1bfc00000, data 0x260b69b/0x270f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 20463616 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:19.689466+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1539776 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 20455424 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:20.689730+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 20455424 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:21.689912+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 20455424 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:22.690103+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: handle_auth_request added challenge on 0x55889f6d0400
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 20455424 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:23.690296+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 ms_handle_reset con 0x55889f6d0400 session 0x55889f354780
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 20455424 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:24.690480+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 20455424 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:25.690657+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 20455424 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:26.690819+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 20455424 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:27.690940+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 20455424 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:28.691785+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 20455424 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:29.691934+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 20455424 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:30.692072+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:31.692290+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 20455424 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:32.692479+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 20455424 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:33.692681+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 20447232 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:34.692835+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 20447232 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:35.692978+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 20447232 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:36.693108+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 20447232 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:37.693285+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 20447232 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:38.693444+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 20447232 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 41K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 12K writes, 4138 syncs, 3.14 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1488 writes, 3861 keys, 1488 commit groups, 1.0 writes per commit group, ingest: 1.74 MB, 0.00 MB/s
                                           Interval WAL: 1488 writes, 690 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:39.693617+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 20447232 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:40.693811+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 20447232 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:41.694003+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 20447232 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:42.694136+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 20447232 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:43.694284+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 20447232 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:44.694475+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 20447232 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:45.694615+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 20447232 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:46.694743+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 20447232 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:47.694892+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 20447232 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:48.695115+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 20439040 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:49.695308+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 20439040 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:50.695534+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 20439040 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:51.695756+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 20439040 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:52.695973+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 20439040 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:53.696170+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 20439040 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:54.696409+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 20439040 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:55.696587+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 20439040 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:56.696810+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 20439040 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:57.696978+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 20439040 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:58.697178+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 20439040 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:14:59.697340+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 20439040 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:00.697528+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 20439040 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:01.697718+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 20430848 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:02.697885+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 20422656 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:03.698093+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 20422656 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:04.698269+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 20422656 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:05.698471+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 20422656 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:06.698692+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 20422656 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:07.698802+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 20422656 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:08.699020+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 20422656 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:09.699182+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 20422656 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:10.699346+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 20422656 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:11.699536+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 20422656 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:12.699803+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 20422656 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:13.699992+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 20422656 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:14.700169+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 20422656 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:15.700339+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 20422656 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:16.700501+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 20414464 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:17.700690+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 20414464 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:18.700906+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 20414464 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:19.701075+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 20414464 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:20.701254+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 20414464 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:21.701423+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 20414464 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:22.701665+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 20414464 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:23.701919+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 20414464 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:24.702064+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 20414464 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:25.702342+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 20414464 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:26.702528+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 20414464 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:27.702719+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 20414464 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:28.702892+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:29.703023+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:30.703266+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:31.703441+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:32.703674+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:33.703904+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:34.704087+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:35.704243+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:36.704666+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:37.704880+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:38.705024+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:39.705181+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:40.705324+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:41.705476+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:42.705642+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:43.705795+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:44.705969+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:45.706120+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:46.706250+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:47.706411+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 20406272 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:48.706548+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:49.706767+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:50.706940+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:51.707107+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:52.707256+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:53.707438+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:54.707622+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:55.707849+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:56.708023+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:57.708168+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:58.708334+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:15:59.708485+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:00.708685+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:01.708912+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:02.709112+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:03.709290+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:04.709461+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:05.709645+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:06.709852+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:07.710062+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:08.710240+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:09.710426+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:10.710590+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:11.710785+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 20398080 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:12.711003+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 20389888 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:13.711265+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 20389888 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:14.711506+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 20389888 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:15.711750+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 20389888 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:16.711965+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 20389888 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:17.712180+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 120.956542969s of 121.008110046s, submitted: 14
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 20389888 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:18.712334+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 20324352 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:19.712451+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 20135936 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:20.712646+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 20111360 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:21.712855+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 20111360 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:22.713019+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 20111360 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:23.713230+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 20111360 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:24.713400+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 20111360 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:25.713628+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 20111360 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:26.713809+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 20111360 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:27.714034+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 20111360 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:28.714189+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 20111360 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:29.714371+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 20111360 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:30.714520+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 20111360 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:31.714692+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 20111360 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:32.714849+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 20111360 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:33.715053+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 20111360 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:34.715230+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 20103168 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:35.715403+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 20103168 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:36.715609+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 20103168 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:37.715793+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 20103168 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:38.715930+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 20103168 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:39.716075+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 20103168 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:40.716234+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 20094976 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:41.716417+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 20094976 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:42.716653+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 20094976 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:43.716949+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 20094976 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:44.717148+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 20086784 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:45.717324+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 20086784 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:46.717489+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 20086784 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:47.717655+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 20086784 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:48.717863+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 20086784 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:49.718052+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 20086784 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:50.718180+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 20086784 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:51.718386+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 20086784 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:52.718620+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 20078592 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:53.718808+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 20078592 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:54.718974+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 20078592 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:55.719149+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 20078592 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:56.719289+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 20078592 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:57.719438+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:58.720097+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:16:59.720328+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:00.720481+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:01.720710+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:02.720908+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:03.721086+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:04.721267+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:05.721457+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:06.721649+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:07.721834+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:08.722033+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:09.722395+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:10.722588+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:11.722790+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:12.722927+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:13.723179+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:14.723420+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:15.723637+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:16.723856+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:17.724062+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:18.724250+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:19.724459+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:20.727520+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:21.727724+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:22.727975+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 20070400 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:23.728196+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:24.728381+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:25.728638+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:26.728902+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:27.729177+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:28.729386+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:29.729551+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:30.729859+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:31.730045+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:32.730274+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:33.730488+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:34.730647+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:35.730832+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:36.730997+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:37.731153+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:38.731425+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:39.731615+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:40.731802+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:41.731983+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:42.732197+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:43.732443+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:44.732820+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:45.732981+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:46.733194+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:47.733387+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:48.733612+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:49.733799+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:50.734018+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:51.734209+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 20062208 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:52.734395+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20054016 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:53.734627+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20054016 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:54.734805+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20054016 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:55.734998+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20054016 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:56.735230+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20054016 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:57.735459+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20054016 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:58.735749+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20054016 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:17:59.736010+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20054016 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:00.736374+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20054016 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:01.736527+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20054016 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:02.736680+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20054016 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:03.737625+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20054016 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:04.737785+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20054016 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:05.737913+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20054016 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:06.738079+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20054016 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:07.738234+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20054016 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:08.738397+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20054016 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:09.738554+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20054016 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:10.738898+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:11.739220+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:12.739476+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:13.739687+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:14.739844+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:15.739960+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:16.740109+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:17.740286+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:18.740507+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:19.740669+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:20.740852+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:21.741038+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:22.741206+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:23.741398+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:24.741654+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:25.741830+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:26.741993+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:27.742177+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:28.742347+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:29.742524+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:30.742721+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:31.742868+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20045824 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:32.743003+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20037632 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:33.743212+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20037632 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:34.743402+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20037632 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:35.743549+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20037632 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:36.743743+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20037632 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:37.743995+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20037632 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:38.744198+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20037632 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:39.744397+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20037632 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:40.744631+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20037632 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:41.744843+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20037632 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:42.745039+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20037632 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:43.745286+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20037632 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:44.745594+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20037632 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:45.745808+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20037632 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:46.746048+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20037632 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:47.746256+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20037632 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:48.746447+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20037632 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:49.746616+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20037632 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:50.746764+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:51.746988+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:52.747213+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:53.747478+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:54.747702+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:55.747841+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:56.748006+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:57.748141+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:58.748326+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:18:59.748631+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:00.748838+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:01.749099+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:02.749362+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:03.749593+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:04.749767+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:05.749929+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:06.750168+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:07.750356+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:08.750521+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:09.750696+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:10.750843+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:11.751013+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:12.751210+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 20029440 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:13.751603+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 20021248 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:14.751763+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20013056 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:15.751934+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20013056 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:16.752116+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20013056 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:17.752283+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20013056 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:18.752435+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20013056 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:19.752647+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20013056 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:20.752897+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20013056 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:21.753115+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20013056 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:22.753267+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20013056 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:23.753448+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20013056 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:24.753659+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20013056 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:25.753798+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20013056 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:26.753974+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20013056 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:27.754179+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20013056 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:28.754352+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:29.754498+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:30.754671+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:31.754920+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:32.755119+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:33.755325+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:34.755476+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:35.755709+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:36.755886+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:37.756167+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:38.756448+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:39.756670+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:40.756937+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 234881024 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:41.757283+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:42.757513+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:43.757847+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:44.758131+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:45.758354+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 218103808 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:46.758537+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:47.758727+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:48.758853+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:49.758986+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:50.759722+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20004864 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 00:20:24 compute-0 ceph-osd[84656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 00:20:24 compute-0 ceph-osd[84656]: bluestore.MempoolThread(0x55889afc7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538903 data_alloc: 218103808 data_used: 19709952
Jan 22 00:20:24 compute-0 ceph-osd[84656]: do_command 'config diff' '{prefix=config diff}'
Jan 22 00:20:24 compute-0 ceph-osd[84656]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:51.759869+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: do_command 'config show' '{prefix=config show}'
Jan 22 00:20:24 compute-0 ceph-osd[84656]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 22 00:20:24 compute-0 ceph-osd[84656]: do_command 'counter dump' '{prefix=counter dump}'
Jan 22 00:20:24 compute-0 ceph-osd[84656]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 19578880 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: do_command 'counter schema' '{prefix=counter schema}'
Jan 22 00:20:24 compute-0 ceph-osd[84656]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:52.759982+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b9760000/0x0/0x1bfc00000, data 0x260b639/0x270e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x3d8f9c7), peers [0,2] op hist [])
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 19652608 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: tick
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_tickets
Jan 22 00:20:24 compute-0 ceph-osd[84656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T00:19:53.760128+0000)
Jan 22 00:20:24 compute-0 ceph-osd[84656]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 19341312 heap: 136839168 old mem: 2845415832 new mem: 2845415832
Jan 22 00:20:24 compute-0 ceph-osd[84656]: do_command 'log dump' '{prefix=log dump}'
Jan 22 00:20:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 22 00:20:24 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/461218030' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 00:20:24 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28058 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:24 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28015 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:24 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18255 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:24 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 22 00:20:24 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/129383719' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 00:20:24 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28082 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 22 00:20:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3512600538' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28033 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 00:20:25 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18279 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 22 00:20:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3756391422' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: pgmap v1882: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.28013 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.18201 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.27967 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.28037 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.18219 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.28043 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.28000 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.18231 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/149489986' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1179386280' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/461218030' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.28058 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.28015 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.18255 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3729723151' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/129383719' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.28082 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3512600538' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.28033 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.18279 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3421019373' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28100 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28060 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18306 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 22 00:20:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3180125879' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28115 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:25 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:25 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:20:25.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 00:20:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2232100980' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 00:20:25 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2232100980' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:20:25 compute-0 crontab[290135]: (root) LIST (root)
Jan 22 00:20:25 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28072 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:25 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18330 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28139 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:26 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:26 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:26 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:26.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:26 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28102 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mon[74318]: pgmap v1883: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3756391422' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mon[74318]: from='client.28100 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mon[74318]: from='client.28060 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mon[74318]: from='client.18306 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3180125879' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2853292338' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3422351805' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mon[74318]: from='client.28115 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2232100980' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.10:0/2232100980' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mon[74318]: from='client.28072 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mon[74318]: from='client.18330 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3521615554' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2349812192' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 22 00:20:26 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3531297066' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28154 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18372 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mgr[74614]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 00:20:26 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-22T00:20:26.698+0000 7fbf53a93640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 00:20:26 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28123 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:26 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 22 00:20:26 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1291046405' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28144 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 22 00:20:27 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1727343191' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 22 00:20:27 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2730976011' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:27 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28175 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mgr[74614]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 00:20:27 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-22T00:20:27.385+0000 7fbf53a93640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 00:20:27 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28156 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 22 00:20:27 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2619154974' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 22 00:20:27 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:27 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:20:27 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:20:27.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:20:27 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 22 00:20:27 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/387765761' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mon[74318]: from='client.28139 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mon[74318]: from='client.28102 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3531297066' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/675322282' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mon[74318]: from='client.28154 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1752841294' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mon[74318]: from='client.18372 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mon[74318]: from='client.28123 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1291046405' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/225562150' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mon[74318]: from='client.28144 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1727343191' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2730976011' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 22 00:20:27 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/313406312' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 22 00:20:28 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:28 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:28 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:28.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 22 00:20:28 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3649146790' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28201 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-3759241a-7f1c-520d-ba17-879943ee2f00-mgr-compute-0-boqcsl[74610]: 2026-01-22T00:20:28.334+0000 7fbf53a93640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 00:20:28 compute-0 ceph-mgr[74614]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 22 00:20:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 22 00:20:28 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/278810494' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 22 00:20:28 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2458068052' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 22 00:20:28 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2421813911' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: pgmap v1884: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:28 compute-0 ceph-mon[74318]: from='client.28175 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: from='client.28156 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2619154974' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/387765761' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2046059740' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/503046661' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2114723541' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1050202931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3649146790' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/278810494' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2100603983' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3351382530' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/595229395' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2458068052' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2421813911' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1550963285' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2251177064' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2250908659' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 22 00:20:28 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 22 00:20:28 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1941390639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 22 00:20:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:20:29 compute-0 systemd[1]: Starting Hostname Service...
Jan 22 00:20:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 22 00:20:29 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4094719697' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 22 00:20:29 compute-0 systemd[1]: Started Hostname Service.
Jan 22 00:20:29 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 22 00:20:29 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2021632632' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 00:20:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 22 00:20:29 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1297532501' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 22 00:20:29 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 22 00:20:29 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2903149426' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 22 00:20:29 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:29 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 00:20:29 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:20:29.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 00:20:29 compute-0 ceph-mon[74318]: from='client.28201 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1941390639' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 22 00:20:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1746224603' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 22 00:20:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1515169008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 00:20:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/4094719697' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 22 00:20:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/875989417' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 22 00:20:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1351583816' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 22 00:20:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3964630732' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 22 00:20:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2021632632' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 00:20:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/4055251361' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 22 00:20:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1297532501' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 22 00:20:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3091413547' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 22 00:20:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/880553931' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 22 00:20:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3956372751' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 22 00:20:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2903149426' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 22 00:20:29 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2026905844' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 22 00:20:30 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 22 00:20:30 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4277655202' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 00:20:30 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18561 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:30 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:30 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:30 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:30.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:30 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18567 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:30 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18585 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:30 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18591 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mon[74318]: pgmap v1885: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/4277655202' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1871584847' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/776583562' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/718384016' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3894795235' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3532629654' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3647579398' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1003079868' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2674110319' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/4275268275' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3275090814' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/874245165' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18609 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 22 00:20:31 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/550657979' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28325 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28331 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:31 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28337 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 22 00:20:31 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2650299289' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28346 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28357 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28352 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28366 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18654 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:31 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:31 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 00:20:31 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:20:31.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 00:20:31 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28370 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:31 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 22 00:20:31 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/884018680' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 22 00:20:32 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28381 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:32 compute-0 ceph-mon[74318]: from='client.18561 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:32 compute-0 ceph-mon[74318]: from='client.18567 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:32 compute-0 ceph-mon[74318]: from='client.18585 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:32 compute-0 ceph-mon[74318]: from='client.18591 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:32 compute-0 ceph-mon[74318]: from='client.18609 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:32 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/550657979' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 22 00:20:32 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/500616152' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 00:20:32 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2650299289' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 22 00:20:32 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/884018680' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 22 00:20:32 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28387 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:32 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18669 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:32 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:32 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:32 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:32.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:32 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28385 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 22 00:20:32 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1903566229' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 22 00:20:32 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28405 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:32 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18687 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:32 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28397 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:32 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 22 00:20:32 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1791638569' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 22 00:20:32 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 00:20:32 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 00:20:32 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28420 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:32 compute-0 nova_compute[247516]: 2026-01-22 00:20:32.992 247523 DEBUG oslo_service.periodic_task [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 00:20:32 compute-0 nova_compute[247516]: 2026-01-22 00:20:32.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 00:20:32 compute-0 nova_compute[247516]: 2026-01-22 00:20:32.993 247523 DEBUG nova.compute.manager [None req-35363235-05e3-4827-901c-be4705a2e663 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 00:20:33 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 00:20:33 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 00:20:33 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28409 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:33 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:33 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28438 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:33 compute-0 ceph-mon[74318]: from='client.28325 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:33 compute-0 ceph-mon[74318]: from='client.28331 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:33 compute-0 ceph-mon[74318]: pgmap v1886: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:33 compute-0 ceph-mon[74318]: from='client.28337 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:33 compute-0 ceph-mon[74318]: from='client.28346 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:33 compute-0 ceph-mon[74318]: from='client.28357 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:33 compute-0 ceph-mon[74318]: from='client.28352 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:33 compute-0 ceph-mon[74318]: from='client.28366 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:33 compute-0 ceph-mon[74318]: from='client.18654 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:33 compute-0 ceph-mon[74318]: from='client.28370 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:33 compute-0 ceph-mon[74318]: from='client.28381 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:33 compute-0 ceph-mon[74318]: from='client.28387 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:33 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1903566229' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 22 00:20:33 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1465125323' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 22 00:20:33 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1791638569' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 22 00:20:33 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1835909745' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 22 00:20:33 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 00:20:33 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 00:20:33 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3483871777' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 22 00:20:33 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28474 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:33 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:33 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:33 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:20:33.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:33 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28421 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 00:20:34 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28486 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:34 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:34 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:34 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:34.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:34 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 22 00:20:34 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3546764780' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='client.18669 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='client.28385 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='client.28405 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='client.18687 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='client.28397 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='client.28420 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='client.28409 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 00:20:34 compute-0 ceph-mon[74318]: pgmap v1887: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/907975305' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='client.28438 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='client.28474 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/3510363819' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='client.28421 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/685839086' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='client.28486 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2095555423' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/3546764780' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 00:20:34 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1638133826' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 00:20:34 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 00:20:34 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18786 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 00:20:34 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 00:20:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Jan 22 00:20:35 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4265772660' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 22 00:20:35 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='client.18786 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/4265772660' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 00:20:35 compute-0 ceph-mon[74318]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 00:20:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Jan 22 00:20:35 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2650450087' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 22 00:20:35 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:35 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:35 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.102 - anonymous [22/Jan/2026:00:20:35.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:35 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28588 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:35 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Jan 22 00:20:35 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2562136583' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 22 00:20:36 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.28526 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:36 compute-0 radosgw[92982]: ====== starting new request req=0x7fd6bf6296f0 =====
Jan 22 00:20:36 compute-0 radosgw[92982]: ====== req done req=0x7fd6bf6296f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 00:20:36 compute-0 radosgw[92982]: beast: 0x7fd6bf6296f0: 192.168.122.100 - anonymous [22/Jan/2026:00:20:36.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 00:20:36 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Jan 22 00:20:36 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2075435996' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 22 00:20:36 compute-0 ceph-mon[74318]: pgmap v1888: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:36 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/1930333066' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 22 00:20:36 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2650450087' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 22 00:20:36 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/293799026' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 22 00:20:36 compute-0 ceph-mon[74318]: from='client.28588 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:36 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2562136583' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 22 00:20:36 compute-0 ceph-mon[74318]: from='client.28526 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:36 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/4222799946' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 22 00:20:36 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/2075435996' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 22 00:20:36 compute-0 ceph-mgr[74614]: log_channel(audit) log [DBG] : from='client.18864 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:37 compute-0 ceph-mon[74318]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Jan 22 00:20:37 compute-0 ceph-mon[74318]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1002973180' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 22 00:20:37 compute-0 ceph-mgr[74614]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 41 MiB data, 249 MiB used, 21 GiB / 21 GiB avail
Jan 22 00:20:37 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/1386051959' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 22 00:20:37 compute-0 ceph-mon[74318]: from='client.18864 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 00:20:37 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/487509582' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 22 00:20:37 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/2157342974' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 22 00:20:37 compute-0 ceph-mon[74318]: from='client.? 192.168.122.102:0/2772965846' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 22 00:20:37 compute-0 ceph-mon[74318]: from='client.? 192.168.122.100:0/1002973180' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 22 00:20:37 compute-0 ceph-mon[74318]: from='client.? 192.168.122.101:0/3193160883' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
